
AI made such remarkable progress in 2022 that the world seems to have finally accepted that this technology is now accessible to the average person for everyday use.
Generative AI models such as GPT-3 and Stable Diffusion will give us the ability to create at a scale and speed that was previously unimaginable. This is about to fundamentally change how humans create value. Every single digital tool that we use for creation – from coding environments to video editors to 3D modelling software – is about to undergo radical change.
Showing off demos of AI-generated art is impressive, but demos do not take into account actual professional workflows and industry requirements. Access to these models through a generic interface is great for experimentation and creating social media posts, but it is not enough for professionals from different fields to do their jobs effectively. For that, we need these models to be woven into products such that they provide real utility within professional workflows.
This realization dawned on me when, as someone who knows very little about the fashion business, I attempted to use Stable Diffusion for fashion design. A generative AI product for fashion designers with no limits to creativity seemed like an obvious idea to me. Enthusiastic, I generated this image of a dress using Stable Diffusion and shared it with my fashion designer friend.

"Can you use this idea to design a new dress like this?", I asked. Her answer was a big, disappointing "No". Other than perhaps using it in a mood board for inspiration, this image was absolutely useless to her.
So for all the generative prowess of Stable Diffusion, its output in its current format is not very useful to a fashion designer. For AI to actually deliver value, we need to think about how its output can be used in a practical way. This may involve preprocessing the model’s inputs, post-processing the outputs, or combining them with human-generated content. The key is leveraging the unique capabilities of AI to solve specific real-world problems for users. Let’s explore how to make this seemingly magical new technology work for my fashion designer friend.
A framework for AI product strategy
For founders and product leaders who are working on AI products, I recommend two broad approaches – vertical and horizontal. The vertical approach involves building AI-assisted software products for the creative process of a specific industry or niche; examples include fashion print design and furniture design. Each of these niches has different requirements and workflows that we have to keep in mind.
The horizontal approach covers products meant for a function that is required across industries, such as graphic design, advertising design or legal contract drafting. Advertising creatives are needed in every industry, from automobiles to food. Here, the product doesn’t need industry-specific tooling. What matters most is a great user interface that allows a user to do graphic design work in a fraction of the time and with higher quality than before.
Vertical AI products
When building AI-powered products for a particular industry, it is critical to build the tooling required for that industry’s specific workflow alongside deploying state-of-the-art AI models. Here’s what an AI product that generates fabric print designs for the fashion industry needs to cover –
- The user must be able to feed the generative AI diffusion model with mood board images along with text prompts as inputs. The mood board and prompt text are usually derived through data analysis of current high selling trends.
- The AI model must generate seamless textures for all-over prints. A seamless texture is an image that can be placed next to itself – above, below, or side by side – without creating an obvious seam, join or boundary between the copies of the image. Here’s an example of a seamless texture I generated by tweaking the Stable Diffusion code. This image is composed of four generated images placed side by side.

- Creativity is an iterative process. A designer should be able to modify the output of each AI generation iteratively, either through manual edits or by introducing specific modifications in the subsequent AI generation.
- There need to be multiple levels of approval by users with different roles. The Head of Design may need to sign off on the final designs.
- The final output needs to be upscaled to a resolution high enough for printing.
- Finally, the output must be converted to a vector file format that can be sent to a fabric printer.
A product for fashion designers needs this industry specific tooling built around a generative AI model’s output.
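The seamless-texture requirement above can also be verified programmatically. Here is a minimal sketch, using NumPy on a grayscale array (the article’s actual Stable Diffusion tweak is not shown, so this only illustrates the tiling and a rough edge-matching check):

```python
import numpy as np

def tile_2x2(texture):
    # Place four copies of the texture side by side, as in the
    # composite image described above (2D grayscale array).
    return np.tile(texture, (2, 2))

def seam_error(texture):
    # Rough seamlessness metric: mean absolute difference between the
    # texture's opposite edges. Near zero means it tiles cleanly.
    left, right = texture[:, 0].astype(float), texture[:, -1].astype(float)
    top, bottom = texture[0, :].astype(float), texture[-1, :].astype(float)
    return (np.abs(left - right).mean() + np.abs(top - bottom).mean()) / 2

# A constant texture tiles perfectly; a horizontal gradient has a
# visible seam where its bright right edge meets its dark left edge.
flat = np.full((8, 8), 128, dtype=np.uint8)
gradient = np.tile(np.arange(8, dtype=np.uint8) * 30, (8, 1))
```

A real product would run a check like this on every generated texture and regenerate or blend any candidate whose seam error is too high.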
Consider another example of an AI assisting with interior design by generating images of how a room could be remodeled in a particular style. Here are excerpts from an article where professional interior designers share feedback on using an AI interior design tool.
When we asked it to create a "bohemian" living room, Horace noted that the algorithm chose a similar palette she had used for clients who wanted the same look. "I could see using this as a tool to develop images for a mood board," she added. "The references all hit the mark."
"There’s a weird buffet right next to a table, and there are two coffee tables for some reason, it doesn’t really seem functional," said Horace, of a midcentury living room. "The look is right, but you couldn’t present this to a client, like, Hey this is your room!"
Interior AI’s space-planning skills sometimes leave something to be desired – and its bathrooms don’t always include a toilet.
The tool would frequently cough up a basic approximation of a room but change the scale, eliminate a window or drop a ceiling by several feet.

From the designers’ review, it is clear that an interior design product’s AI models need to operate within certain constraints that are off-limits to modification. Their review can be crisply summarized in one sentence –
In relying on a vast trove of 2D inspiration images to formulate its understanding of design, it grasps style perfectly but has a looser grip on function.
Note how the problems and constraints of the fashion designer are completely different from those of the interior designer. What is common though is the need for better functionality specific to their industry rather than focusing only on style.
For AI models like Stable Diffusion, fine-tuning the model on a dataset relevant to an industry will be critical. In the case of fashion print designs, a designer may want to generate patterns in a particular style, color palette or mood. Fine-tuning the generative model on a few examples will help generate designs that are much more relevant. Fine-tuning may be needed even for other deep learning models, such as those for object recognition and segmentation, where the objects in question are not present in the training data of the out-of-the-box pre-trained model. Eventually I expect that model fine-tuning tools will be integrated into every product. There is probably a massive company waiting to be built that offers model fine-tuning as an API service.
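To make the idea of fine-tuning as an API service concrete, here is a sketch of what a client request to such a service might look like. The endpoint shape, field names and model identifier are all hypothetical, invented purely for illustration; no real provider’s API is being described:

```python
import json

def build_finetune_request(base_model, image_urls, style_name, steps=800):
    # Assemble a fine-tuning job request body. Every field name here is
    # illustrative, not taken from any real fine-tuning service.
    return json.dumps({
        "base_model": base_model,       # e.g. a Stable Diffusion checkpoint
        "training_images": image_urls,  # a few examples of the target style
        "concept_name": style_name,     # token the designer will prompt with
        "steps": steps,                 # training budget for the job
    })

# A designer fine-tuning on a handful of brand-style floral prints:
request_body = build_finetune_request(
    "stable-diffusion-v1-5",
    ["https://example.com/floral_01.png", "https://example.com/floral_02.png"],
    "brand-floral",
)
```

The appeal of such a service is exactly what the paragraph above suggests: the product team supplies a few example images and gets back a model that speaks the customer’s visual language, without running any training infrastructure themselves.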
Moats in vertical AI products

A product or company cannot rely on the AI model as its only competitive advantage. AI research papers and the code for their implementation are often public. OpenAI released the DALL-E and DALL-E 2 models for text-to-image generation, and soon after, the open-source Stable Diffusion model was released. AI has attracted some of the world’s best minds, and the industry is progressing rapidly. It is guaranteed that someone will eventually release a new and better model than the one you have currently implemented.
A massive advantage of developing an AI product for a specific industry is that it is easier to create a defensible business moat. The convenience of industry-specific tooling makes customers less likely to leave just because a new model has been released. This buys the business time to bring its AI model’s output up to state-of-the-art benchmarks.
Horizontal AI products
When it comes to products used across industries for fields such as image editing or graphic design, new AI-based products face an uphill battle against incumbent giants. Products like Photoshop are rapidly introducing features powered by AI, and professionals who rely on Photoshop’s extensive image editing tools will prefer to use AI features within Photoshop rather than switch to another product. Many startups building horizontal AI products could be short-lived because existing giants have large distribution: if AI models are integrated into a tool that people already heavily use and pay for, they have little incentive to move to another one. Notion, a note-taking app with 20 million users, recently introduced an AI writing assistant. New AI-powered word processors may find it difficult to compete with Notion’s existing large user base and network effects.
The opportunity for any new horizontal AI product lies in giving GOOD, CHEAP AND FAST output to a user who is not competent in that field. It’s a popular joke that when working with designers, you can only get two out of three.
To succeed, any new horizontal design AI product MUST deliver all three to a non-designer user. It goes without saying that the product should be easy to use. Unless it is, the user is unlikely to get a high quality output quickly. In other words, the opportunity lies in lowering the barrier to entry in a creative field that was earlier dominated by skilled professionals.
Canva is a great example of a company that has already demonstrated this. Canva enabled someone in accounting to create a good event poster, proving that much of design, for which designers often become a bottleneck, does not require the advanced tooling of Photoshop. Canva found its niche among non-designers by building a simple product with templates and stock content that allowed them to create good content quickly and cheaply. AI feels like a Canva moment all over again, but for every creative field and on a much larger scale. Generative AI models like GPT-3 and Stable Diffusion are especially relevant because non-designer users do not have to start from zero: with just a text prompt, they get a starting point far ahead of anything they could have created themselves.
User interfaces are likely to undergo several radical changes after a lot of experimentation. For so many topics, ChatGPT’s conversational interface is much faster at returning an accurate answer than a Google search. Conversational interfaces look extremely promising for a variety of applications. Your itch to tell the designer that "the colour needs to be more blue" will finally be satisfied.
For generative models, art style presets and pre-fine-tuned models, along with on-the-fly fine-tuning capability, can give the user more control over the output.
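One simple way such style presets might work under the hood is prompt templating: the product appends curated style descriptors to whatever the user typed. A minimal sketch (the preset strings below are invented examples, not taken from any real product):

```python
# Art-style presets as prompt templates. The descriptor strings are
# illustrative only; a real product would curate and test these.
STYLE_PRESETS = {
    "watercolor": "soft watercolor painting, pastel palette, visible paper texture",
    "art-deco": "art deco poster, bold geometric shapes, gold and black",
}

def apply_preset(user_prompt: str, preset: str) -> str:
    # Append the preset's style descriptors to the user's own prompt
    # before sending the combined prompt to the generative model.
    return f"{user_prompt}, {STYLE_PRESETS[preset]}"

styled = apply_preset("a summer dress with floral motifs", "watercolor")
```

The benefit for a non-expert user is that they never have to learn “prompt engineering” vocabulary; picking a preset from a menu injects it for them.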
Moats in horizontal AI products
Deploying the best performing model trained on a high quality and large dataset can be quite beneficial for a horizontal AI product. Since these products need to work for users across industries, investing in AI R&D to maintain state-of-the-art model performance could be a strong moat. However, it cannot be the only moat because of the highly competitive landscape and the amount of money flowing into AI. Additional moats can be built through user experience, product positioning, distribution, community, network effects, the team’s execution and quick iteration on user feedback.
RunwayML is an AI powered video creator and editor that is doing this well. RunwayML is lowering the barrier to entry for content creation with features ranging from erasing and replacing objects to automated motion tracking in videos. Their contribution to the development of Stable Diffusion means that they are investing in pushing the boundaries of state of the art AI. Through a combination of these approaches RunwayML has acquired large customers and is building a long term moat around their business.

Much of Canva’s success can be attributed to the moat of its positioning as a tool for non-designers. This customer segment does not overlap with the target audience of Adobe Photoshop and Illustrator which are used by skilled designers. Deeply understanding and embracing their positioning helped Canva introduce features for things that designers study as a part of their training but that average users do not know, such as a color palette generator and font suggestions.
Building a thriving community is a great and quite underrated way to build a moat. Actively engaging and growing a community can deliver profound product feedback. This is especially useful while building features with new user interfaces that are suitable for non-experts. Quick feedback loops through community engagement enable rapid product iterations.
Economic impact of generative AI products
AI tools for applications such as content recommendation, image relighting, optical character recognition and speech-to-text have been around for a few years and are already part of products we use every day. The wave of AI products that actively assist our creative process, for everything from graphics to code, is only just beginning. The question on everyone’s lips, to which nobody seems to have a definitive answer, is: how will this impact human jobs? It is an incredibly difficult question with no single answer. However, I am certain that the answer does not lie at either extreme – AI replacing humans completely versus the human mind being irreplaceable. A conclusive answer that will stand the test of time is, in my opinion, impossible. The nuanced answer will lie somewhere in the middle, constantly moving between these polar endpoints as the technology progresses over time.
Digital artists have been using AI powered features in Photoshop for a while now for tasks like intelligent background removal and content aware fill. It was just under the hood and not overtly advertised as AI. Today generative AI diffusion models are able to contribute from step 1 of the creative process (let’s consider step 0 as deciding Why and What to create). While AI earlier contributed to a small percentage of the creative output, over time it will contribute ever more. I see two clear consequences of these new tools of creation:
- They will significantly reduce the barrier to entry for many fields resulting in an upsurge in the number of creators
- They will enable a creator to generate a much larger volume of output in a given time
The first consequence means MORE human employment, economic activity and monetary uplift. Suddenly the tools of Hollywood are in everyone’s pocket. The second means a smaller number of people can do the work of many, resulting in employers laying off large numbers of people. Will this result in catastrophic mass unemployment? Or will these people become independent creators as well? These things are notoriously difficult to predict. Everyone thought that AI would eliminate repetitive manual labor and that the creative folks were safe. Nobody thought AI would be writing code or creating art.
The world will be divided into those who use AI and those who do not.
Those who build these products will reap great rewards. We are on the cusp of a period that is likely to be remembered as one of the most defining in human history.
If you are a product manager working on building AI products, I have authored a three-part practical guide on how you can execute:
- Part 1: Groundwork – what makes AI Product Management different, should you use ML for your project, prerequisite knowledge for AI PMs
- Part 2: AI team management, product planning and development strategy
- Part 3: Model selection, deploying to production, model maintenance and cost management