
Project Overview
User Story
As an AWS Supply Chain Data Manager,
I want to be able to connect data sources easily, map data schema, and monitor ingestion errors,
So that I can enable my business users to review insights and create demand plans without issue
Project Summary
AWS Supply Chain is a set of applications that work individually and together to provide various functionality for different personas across supply chain management. The first step to opening those applications to business users is getting data into AWS SC to ensure the applications can be used consistently without disrupting the business users’ workflow.
The AWS Supply Chain application aimed to complement the customers’ current supply chain ERP (Enterprise Resource Planning) tools and provide contextual insights based on historic and projected inventory trends.
To provide those insights, our customers need to get data into AWS quickly and in the correct format so we can use that data to generate insights for them. Traditionally, getting data from the source into a new application tends to be a long process and can be frustrating for the primary user, the Data Steward.
The work done in 2022 is the first step toward an automated data ingestion modeling application at AWS. The long-term vision is to take a piece of source data and automatically map it to the destination for the customer to review and confirm, rather than starting from scratch and mapping each dataflow one by one, which is what makes the process so time-consuming today.
Learning the space
Being Curious
Right when I landed on the team, I knew I had a lot to catch up on. For my first 8 months on the team, I dedicated significant time to learning the spaces we were in. I worked with PMs and product Subject Matter Experts to learn about:
- ERP – Enterprise Resource Planning
- ETL – Extract, Transform, Load – a method of data transformation
- Demand Planning
- Supply Planning
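To ground the ETL acronym from the list above, here is a minimal sketch of an extract-transform-load pass over a single inventory record. The field names and schema are illustrative assumptions, not the actual AWS Supply Chain data model.

```python
# Minimal ETL sketch: extract raw rows, transform field names/types,
# and load into a destination list standing in for the data lake.
# Field names here are illustrative, not the real AWS Supply Chain schema.

def extract(source_rows):
    """E: pull raw records from the source system."""
    return list(source_rows)

def transform(row):
    """T: rename fields and normalize types to the destination schema."""
    return {
        "product_id": str(row["sku"]).strip(),
        "quantity_on_hand": int(row["qty"]),
    }

def load(rows, destination):
    """L: write transformed records to the destination store."""
    destination.extend(rows)

source = [{"sku": " A-100 ", "qty": "42"}]
datalake = []
load([transform(r) for r in extract(source)], datalake)
print(datalake)  # [{'product_id': 'A-100', 'quantity_on_hand': 42}]
```

Each of the three steps maps to one of the simplified workflows described later: creating the connection (E), mapping and transformation (T), and data ingestion (L).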
Building a Datalake
Like any project I am on, I am always sketching and working through concepts and this project was no different. I started iterating on the customer journey, what the model would look like if it took more than a single session to complete, how we can communicate complex data mappings on the front end, and much more.
I worked with my PM partners to make sure I understood the primary goals of the first release of our Supply Chain Datalake (SCDL) and Auto-association.
Break things apart // Simplify
After working with internal partners to define (1) what was most important to deliver for our first release and (2) where things get tricky in a data set-up workflow, we concluded that the first set of features we would aim to deliver were:
Simplified workflows for:
- Creating the connection (E)
- Mapping and Transformation (T)
- Data Ingestion (L)
Visual connection health
- First step in the Data Quality Dashboard’s long-term vision
Audit Trail
Process

Early Iterations
Quickly iterating through different concepts for the mapping screens was the first goal. We had a few limitations to work through, but we were able to get a couple of versions in front of our Amazon Forecast partners for feedback.
Those screens ultimately changed as we found some big gaps in the designs, but the feedback was amazing.
Sketches
Diagrams
Explorations
Mid-Project Iterations
After pivoting to break the workflow apart into smaller subsections of the overall mapping and ingestion steps, we started to review with internal stakeholders and partner teams again. The feedback this round was direct and detailed, which was helpful. With some strategic placement of our engineers, however, we had to trim back the feature set for our first release; we will add those features back as ‘fast follows’ after our release announcement at Re:Invent.
Sketches
Delivered Designs
Now that we had a focused set of features and were running up against a delivery date, I moved into prototyping a final direction for Re:Invent, with the understanding that we would add features quickly post-Re:Invent and pivot to a data-first model by Q4 2023, a known pivot driven by urgent first-release requirements.
The primary features for our private beta release are listed below.
Creating Connections
The first step in getting data into our data model is creating a connection. We wanted to simplify the connection process so users can quickly connect via API where possible, while also allowing them to dump full datasets into an Amazon S3 Supply Chain bucket and extract data from that bucket into AWS Supply Chain.
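As a rough sketch of the S3-dump path, the snippet below shows how a dropped dataset file might be keyed inside a Supply Chain bucket so each drop is grouped by dataset and date before extraction. The bucket name and key scheme are hypothetical, not the product's actual layout.

```python
from datetime import datetime, timezone

# Hypothetical layout for datasets dumped into an S3 Supply Chain bucket;
# the bucket name and key scheme are illustrative assumptions.
BUCKET = "example-aws-sc-ingestion"

def ingestion_key(dataset: str, filename: str, when: datetime) -> str:
    """Build a deterministic key that groups each drop by dataset and date."""
    return f"{dataset}/{when:%Y/%m/%d}/{filename}"

key = ingestion_key("inventory", "inventory_full.csv",
                    datetime(2022, 11, 1, tzinfo=timezone.utc))
print(f"s3://{BUCKET}/{key}")
# s3://example-aws-sc-ingestion/inventory/2022/11/01/inventory_full.csv
```

A deterministic key like this is what lets an extraction job discover new drops without the user having to register each file by hand.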
Recipe Management (Transformations MLP)
Once a connection is established, users need to create Recipes, our term for the transformation schemas that map source data to the destination. Though our feature set for release 1 is limited, we will expand this feature extensively from Q2 2023 and beyond, but we had to start somewhere!
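To make "Recipe" concrete, a recipe can be thought of as a declarative map from source fields to destination fields, with an optional transform per field. This is an illustrative sketch under that assumption, not the shipped recipe format.

```python
# A Recipe sketched as source-field -> (destination-field, transform) pairs.
# The field names and the recipe structure are illustrative assumptions.
RECIPE = {
    "item_no": ("product_id", str.strip),
    "on_hand": ("quantity_on_hand", int),
}

def apply_recipe(source_row, recipe):
    """Produce a destination row by applying each field's transform."""
    return {dest: fn(source_row[src]) for src, (dest, fn) in recipe.items()}

row = apply_recipe({"item_no": " B-7 ", "on_hand": "15"}, RECipe := RECIPE)
print(row)  # {'product_id': 'B-7', 'quantity_on_hand': 15}
```

Keeping the mapping declarative is what makes it reviewable by a Data Steward and, eventually, proposable by automation rather than written by hand.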
EDI Message Auto-Association
The long-term vision of the product is to automate data ingestion for all data connections and source datasets as much as possible. This is a revolutionary concept that does not exist today. This feature is the very first step: the user drops a source EDI message (data file) into our Recipe Manager, and AWS Supply Chain returns a fully mapped version that maps to our data model in under 4 seconds. Once released, we will learn a lot, but it is very exciting!
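One way to sketch the auto-association idea is fuzzy matching of source column names against the destination data model, leaving anything ambiguous for the Data Steward to review. The real system is far more sophisticated than this; the destination fields and the name-similarity approach here are assumptions for illustration only.

```python
import difflib

# Hypothetical destination fields in the data model (illustrative only).
DEST_FIELDS = ["product_id", "quantity_on_hand", "ship_date", "site_id"]

def propose_mapping(source_fields, dest_fields=DEST_FIELDS, cutoff=0.5):
    """Suggest a source -> destination mapping by closest-name match,
    leaving unmatched fields as None for human review."""
    mapping = {}
    for src in source_fields:
        hits = difflib.get_close_matches(src, dest_fields, n=1, cutoff=cutoff)
        mapping[src] = hits[0] if hits else None
    return mapping

print(propose_mapping(["product", "qty_on_hand", "shipdate", "mystery_col"]))
```

The key design idea the feature embodies is the same regardless of the matching technique: propose a complete mapping instantly, then let the user confirm or correct it instead of building it from scratch.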
Audit Trail and Error Identification
One of the essential applications within our Datalake management space in AWS Supply Chain is the ability to give the user as much information about errors as possible. Without this feature, the user would have no idea what is failing or why, which would be a poor experience. We prioritized this feature over some others because shipping without it would have made the product unusable, and given our release deadline we wanted to get a product out in the world for ongoing feedback.
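A sketch of the error-identification idea: validate each ingested row and record what failed, where, and why, so the user can act on it. The specific checks and the audit-record shape below are illustrative assumptions.

```python
# Illustrative audit-trail entries for ingestion errors: each failed row
# records its position and the reason it failed, so the Data Steward can
# diagnose problems instead of guessing. Field names are assumptions.
REQUIRED = {"product_id", "quantity_on_hand"}

def validate_rows(rows):
    """Return (clean_rows, audit_errors) for a batch of ingested rows."""
    clean, errors = [], []
    for i, row in enumerate(rows):
        missing = REQUIRED - row.keys()
        if missing:
            errors.append({"row": i, "error": "missing_fields",
                           "fields": sorted(missing)})
        elif not str(row["quantity_on_hand"]).lstrip("-").isdigit():
            errors.append({"row": i, "error": "not_an_integer",
                           "fields": ["quantity_on_hand"]})
        else:
            clean.append(row)
    return clean, errors

clean, audit = validate_rows([
    {"product_id": "A-100", "quantity_on_hand": 42},
    {"product_id": "B-7"},                              # missing quantity
    {"product_id": "C-2", "quantity_on_hand": "n/a"},   # bad type
])
print(audit)
```

Surfacing the row, the error type, and the offending fields together is what turns a silent ingestion failure into something the user can actually fix.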
Follow Through
With the launch criteria finalized for Re:Invent, we needed to act quickly to deliver the final designs to our engineering teams. Although they had been closely involved throughout the design process, it was now crucial to provide them with comprehensive design guidance in a short timeframe to ensure they had adequate runway for implementation.
The AWS Supply Chain Launch Announcement Video.
Adam Selipsky
AWS SC Demo
GM and Product Lead deliver a walk-through of AWS Supply Chain v1.
Learnings
Humans
I learned that Supply Chain users are both creatures of habit and searchers for better, easier ways to do their job. We aimed to get AWS SC Insights out quickly so we could start gathering broader feedback to improve the overall functionality and provide better ways for our users to work.
Process
I have always had a very fluid process, and that has helped me be a successful designer in many different product areas, but this launch taught me the following:
- Presenting to leadership requires very detailed attention to every word and pixel.
- Being a Design Led organization comes with its benefits and its challenges
- Learning from all interactions across every piece of the team is critical to success
- Diagrams help; build, evaluate, and maintain
Myself as a Designer
From the time I stepped onto the team, through the massive hiring effort, to now leading my team, I have learned so much from this work and this project:
- I can design complex systems that scale far into the future
- Follow-through is critical, not only for design delivery but for everything I was responsible for
- My experiences over the past 18 years were critical to my success
- Flexibility within reason helps when you support 8 tech teams
Next Steps
Fast Follows
Once we delivered the final designs in September of 2022, we immediately started working on the deprioritized items and pulled them back into a prototype to start reviewing with internal teams right away. After a few iterations, we were able to deliver the feature set early for a release in Q1 2023.
Next Steps
- Data Connection Management enhancements
- Recipe management enhancements
- Navigational adjustments
- Audit trail and error handling enhancements
- Raw file upload in the data ingestion manager

In Conclusion
In the end, it was pretty insane that I was able to be a part of this project. Looking way back to when I was doodling buildings in San Francisco, I would never have thought I would be part of a team of this magnitude and talent. I was honored to work with so many talented people and customers on this project and can’t wait to use all the new things I learned.
-M