Beyond Prototyping
with Nate Parrott
March 2026
Prototyping is an increasingly important skill for any designer working in AI. So, for our next interview, we're excited to be talking with Nate Parrott, a designer at Anthropic who's well known for the prototypes of wildly imaginative, whimsical interfaces he often posts to Twitter. Previously, Nate was at The Browser Company of New York, where he pioneered AI internet experiences with the Arc mobile app.
We spoke with Nate in the summer of 2025, so it's interesting to look at which aspects of this interview already feel dated, and which have stood the test of time. Along the way, we debate the enduring value of human-designed software and Nate explains why the art of AI design comes down to knowing when to let the model cook and when to put it in a baby chair.
What's a non-obvious insight you've picked up from working first-hand with LLMs? They can be kind of counter-intuitive and unpredictable.
I've done a bunch of really small random projects that are basically like, "hey, wouldn't it be cool if you could ask an LLM to generate an output in this format?" There's a lot of low hanging fruit around that. But I think the most interesting one is this project that I did maybe a year ago called 42 pages.
The idea was that AI is getting really, really good at writing code. It's getting really good at taking an objective description of what you want to happen in a program, in plain English, and just programming it.
Mhm.
So I was like, "Ok, how will this affect how we make software?" And I realized, oh — this is great. It (making software) will all be design.
Right now, if you want to make a product, somebody spends X number of hours in Figma and then somebody else spends Y times 10 hours in a code editor, because it takes so much longer to make it.
My thoughts were — can we just collapse that whole coding time? What's the process of making software? It's in the design tool, right? So, let's build a design tool that lets you create software, like fully fledged software.
You mock it up, it will be designed in such a way that it will be really, really good for an LLM to consume, and you could just click a button and have your designs be real.
You can obviously just go into Figma, screenshot your mocks, go into Claude Code (although Claude Code didn't exist at the time) and be like, "Code this up for me". It will do an okay job, but it will usually get the visual details pretty wrong. The expressive power of just passing off screenshots isn't great, because you also have to pass off descriptions of how things work: exact colors, exact sizes, radii, fonts, etc.
I was like, "I'm going to make a version of Figma that behaves like Figma, works like Figma, but is entirely built on top of HTML." So rather than Figma – which has its own format and its own renderer or whatever – let's just build a tool that lets you design and output HTML. You can just give the model the HTML you made and be like, "This is my design, it doesn't work right now, please make it work. Don't change the colors; don't change the layout. Make it work while keeping the pixels the same." This way you can do what nobody else lets you do – pixel-perfect design to code.
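The handoff Nate describes (passing finished HTML to the model instead of screenshots) can be sketched as a simple prompt builder. This is a hypothetical illustration, not 42 pages' actual code; the function name and instruction wording are made up:

```python
def build_handoff_prompt(design_html: str, behavior_notes: str) -> str:
    """Wrap a static HTML mock in instructions for a code-generating model.

    Because the design is already HTML, the model gets exact colors, sizes,
    radii, and fonts for free -- no lossy screenshot step in between.
    """
    return (
        "This is my design. It doesn't work right now; please make it work.\n"
        "Don't change the colors. Don't change the layout. "
        "Keep the pixels the same.\n\n"
        f"Intended behavior: {behavior_notes}\n\n"
        f"```html\n{design_html}\n```"
    )

prompt = build_handoff_prompt(
    '<button style="background:#6c5ce7;border-radius:8px">Save</button>',
    "Clicking Save persists the form and shows a toast.",
)
```

The key design choice is that the constraints ("keep the pixels the same") travel with the artifact, so the model is asked only to add behavior, never to reinterpret the visuals.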
It was a really fun project. I spent a lot of time just recreating Figma UX/UI and learned a lot about how to use AI in the process. Things like how to set up the right code structure to let AI tools write the bulk of the work, because it's a lot of work to write something like that. But I never actually made anything of it — I built it out, thought it was really cool, but decided I'd disproven the thesis. So I decided not to pursue it.
When was this project started?
I think I started August 2024. I was using Copilot.
You were using Copilot?
I was. I probably should have been using Cursor but I wasn't. I think Cursor was cool to ask questions, but it was kind of like the early Replit use cases where it was like, what does this function do? It wasn't like today where I can go in and be like, "I want to add a feature for users to track their word count and goal for the week," or something like that. And it'll be like "okay."
But you said you disproved the thesis. What did you learn?
There were a couple problems. The first one was: the models were not good enough. So maybe I'll go back and revise it because they've gotten significantly better. But it was at this point where you could have it code a to-do list, but you couldn't have it code a whole application. It was good for one screen with one interaction and pretty poor for the rest.
But the other thing that I thought disproved it was, this whole new category of app came out, which was the Bolt or V0 or Lovable. All these apps where it's a prompt box and you're like, "here's the app I want," and it makes it for you. As I started seeing these crop up, I was like, wait a minute.
To me, these are the design tools that are going to win. And they're pretty crummy design tools because it's like: you type a prompt, you get a UI, whatever. If you ask for a to-do list app, you'll get the most probable UI the AI has seen in its training distribution for a to-do list app, right? But it's just — so. fast.
You don't do any thinking, you're just like, here's what I want, here's the problem I have to solve. Give me one version, give me 10 versions, whatever, iterate.
I think that the trends that have played out in AI for coding -- that people will gladly use AI tools when they make the grunt work much faster, even if you lose a little bit of low level control -- those things will also probably play out in design. And so the more I used 42 pages, the more I was like, there's no way that people are actually going to be designing software in the future in a tool like Figma. It's just too manual. I think the manual work will probably be automated out in favor of UIs that let the ideas shine or let you do 10 iterations in one go or something like that. So it felt like the wrong kind of design tool to me.
Every once in a while I find myself in a position where I still have to make a Figma prototype with a fixed number of states. And it's like, you're wiring the 'toggle on' position screen to the 'toggle off' position screen. And it's like, you know, I could implement this faster than I could design it.
It's interesting. I still use Figma on a daily basis, but I would say that Figma is a drawing tool. I think there was a peak where Figma was a prototyping tool.
Yeah.
But now I think we've actually passed that window and you could even just screenshot the Figma UI, and then pass that to Claude and say, prototype this. And it'll probably do a decent job.
I had a crazy one once where I skipped Figma entirely and just went from a PRD to Cursor, and it was actually good enough to get a first version working with reasonable assumptions for how the design should work.
Yeah, yeah. That's the best way to prototype with AI. A lot of the time, a screenshot is plenty. I found that increasingly, design becomes a matter of figuring out how to communicate your ideas to the model. In that same way you as a designer are thinking, "what tools am I giving the user?" and "how is my interface explaining to the user what should happen?" You have to do the same thing for the model.
Right.
For 42 pages, that meant figuring out the answers to questions like, "how do we take the user's design and put it in a format that's as easy as possible for the model to grok?" It turns out – that's HTML. Or, "what's the right way for a designer to add inline comments?" Taking a step back, it's a matter of figuring out what tools the model needs to make what the user wants a reality. What does it need? It's not going to tell you. They're pretty good at not telling you. And so you have to figure out how to put on your little user-empathy hat for the model and really, you know, work your way towards that.
Maybe somebody will come up with a tool like 42 pages in six months. I'll be like fuck, I should have done that. I should have more conviction in this idea. That's just building products. You have to have the right appropriate amount of conviction. Not too much, not too little.
To the point of "Figma is becoming a drawing tool instead of a prototyping tool," I wonder what a version of 42 pages that was really focused on the drawing experience would look like?
Yeah. There's something really interesting to that. One thing I have been thinking about doing with it is leaning into this idea of letting you use prompts to design a first draft or 10 iterations of a first draft of the UI, and then giving you the tools to directly manipulate the file. So kind of similar to the thing that Figma launched then unlaunched, where it would generate designs for you that you could then go and edit, but something a little bit more open ended and much more aimed around not generating something that you would actually put out as your own work, but giving you 36 different sketches of options.
I'm a Magic the Gathering player and I've made a few vibe-coded tools for it. There's a deck building one, there's a simulation one, and it's really, really easy to get that first pass. They all kinda look the same, and then there can be a whole bunch of problems where it can be surprisingly hard to change small aspects of the design if you naïvely vibe-edit it.
I think that there's just a tremendous amount of value in the AI being able to shoot out a bunch of really quick drafts. But it would be a real shame if in that process we lost the ability to get in there and get your hands dirty and tweak something live. That's a real risk. When everybody goes straight from prompting to code, you lose that ability to move stuff around and use little sliders to perfect the colors and stuff like that. We need to figure out how to amplify that rather than saying "Oh, you know that's just going to die."
One of my favorite things in 42 pages, actually, was that because everything was HTML, you could ask it to generate custom properties, shadows or animation curves or stuff like that. New effects. And what was cool about it is that these would all just be compiled into a little function that could then run live. And so you could have little sliders to control the stuff that would all run live. So the model isn't having to recode all this stuff. You're just dragging. You're manipulating it live.
What's cool about it is that, now you can achieve all these intangible effects that you as a designer only get through direct manipulation, inside the LLM environment, where you can make anything. I hope that as vibe coding tools evolve, the tools will also give me all of the little sliders and handles that I can use to perfect the animation curves, the colors, the styles, whatever else.
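The live-slider setup Nate describes can be sketched like this: the model compiles an effect once into a parameterized function, and dragging a slider only changes parameter values — the model is never re-invoked. A minimal Python sketch with made-up names, standing in for what would be a compiled JS function in the real tool:

```python
# A model-generated effect compiled once into a parameterized function.
# Slider movements only mutate the parameter values, so manipulation
# stays live: no round-trip back to the model to "recode" anything.

def drop_shadow_css(blur_px: float, spread_px: float, opacity: float) -> str:
    """Render a CSS box-shadow declaration from tweakable parameters."""
    return f"box-shadow: 0 4px {blur_px:g}px {spread_px:g}px rgba(0,0,0,{opacity:g});"

# The "sliders": a dict of live parameter values the UI can mutate directly.
params = {"blur_px": 12.0, "spread_px": 2.0, "opacity": 0.25}
print(drop_shadow_css(**params))   # box-shadow: 0 4px 12px 2px rgba(0,0,0,0.25);

# Dragging a slider just updates the dict and re-renders:
params["blur_px"] = 24.0
print(drop_shadow_css(**params))   # box-shadow: 0 4px 24px 2px rgba(0,0,0,0.25);
```

The split matters: the expensive, slow step (asking the model to invent the effect) happens once, while the cheap, fast step (re-rendering with new parameters) happens on every drag.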
What do you think about the possibility that foundation video or world foundation models will get so good that they will just render the entire UI on the fly? Would that obviate the need to continue creating the next generation of expressive design tools?
Yeah, maybe it's all just going to be on demand, right? And I think we can start to see the seeds of that kind of interface today with, you know, ChatGPT. You can imagine a world in which ChatGPT can start showing you buttons, dialogs, and text boxes, trying to elicit stuff and updating its own UI.
The model isn't making the aesthetic choices yet, but you're starting to get into this world where the UI is generated on demand. Then eventually you wind up in a world where it's crazy KidPix style UI. I could go either way on what I think the future will be.
Is it going to be we all just have one little AI guy that we talk to who makes little UIs for us that are bespoke on demand, or are there still going to be people whose jobs it is to think really critically about how these interfaces and programs should be structured? It could just be that we go with this personal approach and everybody has something that's different. But I think there is a lot of value societally in us having the same tools. There's a lot of value in the fact that my Microsoft Word is the same as your Microsoft Word. And it would suck if we all had our own little Microsoft Words, even if they were perfect for us. Or maybe it doesn't? I don't know.
The other question I think is, are people going to be interested in defining this? I can imagine a world in which I'm an editor at a Big Five publishing house and I sit down for my first day at work and somebody's like, okay, it's your job to design your perfect editing software. And I'd just be like, "the fuck are you talking about? This is not what I'm here for." I have no desire to be a software designer. I just want the off-the-shelf thing that everybody else uses. This is why I don't know which way it goes. Probably it will be somewhere in the middle, where you can now change things around, but you are still buying into a system that has been designed by somebody who spent more time thinking about how this stuff should be than you have. In the same way that most things we do in our lives, we don't reinvent from first principles, because nobody has time for that.
Right, it probably won't look the same – it won't be a designer deciding exactly what every button is called and where it goes and what it does. But there's still value for someone to get to know their users really well, find the things they mostly agree about, and codify that into an elegant design.
Yes, there will probably be somebody thinking about that. And the question is, is that a human being? It would really suck if it wasn't a human being. It's hard to even imagine if it's not a human being.
Well, one of the jokes I've made (maybe it's not actually that big of a joke) is that in the future there's only going to be one company and it's OpenAI. Or maybe there's two. Anthropic and OpenAI are taking turns.
Yeah, I hope not. But if it is a computer, it's hard to imagine that doesn't just gobble up everything a human can do at that point!
Where do you draw the line on AI involvement in the creative process? You're saying it would really suck if a human wasn't doing it. What are some cases where it actually matters that it's a person and not an AI?
To me, I think there are certain things where fundamentally you value the human connection, even though it could be made synthetically. I mean, you see this with all sorts of consumer goods that you could buy prepackaged, made in a factory, and yet many people seek out the human-made alternative.
I think there will always be worlds in which the fact that it is made by a human, by definition, is what makes something valuable, even if the AI is just as technically proficient. For example, imagine that somebody invented a robot that was really fucking good at playing baseball.
Nobody would let the robot play baseball games because that's not the point. Right? The point is not that you can hit the ball really far. The point is human beings doing human things, as part of a human tradition of coming together.
It feels like that kind of valuing-the-human-touch is becoming a lot more rare nowadays. I've been thinking about how reading books -- actually enjoying novels and stuff -- is becoming its own subculture, when it used to be mainstream. Which makes me wonder if valuing human constructed objects or products is also going to become its own subculture.
I wonder what that looks like for software in particular, where for movies and music, there is already this cultural dialogue where people are like, "it doesn't matter that your AI generated music sounds good. I don't want to listen to it because I value human artists." But people don't really think about software that way as much. Nobody goes, "I want to use this interface because I know it was made by a human being." So it will be interesting to see if that changes, which it totally could.
We choose to value human artists and human writers and human musicians. We don't choose to value human product designers in the same way. We could. That's a possible outcome, and I hope it happens. Maybe there's much more of a world for opinionated software like Arc, where it's not built for everybody but it's built for some opinionated set of people.
Did you work on Dia before you left The Browser Company?
I worked on the exploratory phase that led to it, for a good while.
Was there anything that it taught you about AI or large language models?
There is this idea of context mattering a lot. What was the Kamala Harris phrase? "Everything exists in the context." You know what I'm talking about. You didn't fall out of a coconut tree. You exist in the context. And that was the idea behind Dia: you exist in the context.
How did you guys learn this lesson? Or was it just a leap of faith?
We wanted to find an intersection between what is newly possible in this AI world, what a browser can do, what makes sense for browsers to do, and what's useful for people. So context is what exists at the center of that Venn diagram. Right? The browser is essentially the OS of your computer these days. Most of the applications run within that platform. And so you have access to all that context. And that seems to be what is important for AI. And so we're going to bet on that.
How about Arc Search, did a lot of people use it? You launched it, right? Did anything surprise you about how people were using it?
The product formula that we used for Arc search ended up working really well and it was very simple. First of all, we were going to build really good browser basics. The browser is this product that has an existing set of things it has to do: It has to let you search, it has to let you open tabs, it has to let you access your previously open tabs. Our goal was to make the basics rock.
We thought about what most people spent most time doing. Number one is probably a new search. Number two is look at a previous page. Number three is look at a previous tab, number four is look at a tab from their computer. And so we stack ranked those and designed the UI around making it as easy as possible to do the thing at the top of the list, and then we had this little cherry on top, which was the "browse for me" AI search engine stuff, to get people excited and bring them in the door. I think that worked really well: to have one flashy feature coupled with this bread and butter that works really, really well. Those two things together are really valuable because you need to give people a reason to care, and you need to give people a reason to stay.
The Arc search feature was the first feature where I thought, "this AI thing is a marquee feature of the product." I think that is a big part of the reason that the direction of the company became more and more AI focused. But it'll be interesting to see how that plays out long term, because as AI gains the ability to do more and more things, there's a real challenge: how do you update people's mental model of what it can do? Right now everyone has this random subset of things they think ChatGPT is good for, and that picture can be hard to shift.
Now you're at Anthropic! What's a day in the life look like? Are you writing Google Docs? Designing rectangles, writing code? What's the sort of proportion you do each of those?
More Google Docs than you'd think. More Slack posts than you'd think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing with pixels.
For example, this is not my direct responsibility, but we have several people at the company on the design team whose job is content design. Their job is basically to look at concepts which are very alien, and figure out how to make them legible to human beings. They don't draw any pixels, but their work is really important because they are literally thinking about the words we use to describe things, and the mental models we expect people to adopt, that will make this stuff work.
It's a lot of just you know, talking to users and getting feedback from folks. There's definitely some Figma, but I would say that most of the Figma is the easy part. I use our design system. It's well set up and so I don't have to do too much there. The work is more about expressing the ideas. Then in terms of prompting, there are a lot of designers who do a lot of prompting. Then another part that is kind of unique to my own practice is a bunch of prototyping, so I have internal apps that I'll send out to people. So it's a lot of that.
What is your title?
I am just a product designer, but that is how I tend to do work. And I did this at Browser Company as well. Y'know, making stuff. I have my own little reference copy of the app that I have just made myself, that works much shittier than the real one, but is very easy to hack on, and has enough functionality that you could use it as your daily driver if you wanted to. I found that to be really valuable and much more doable in the age of AI.
I think at any company where you have tons of engineers, there will be reasons that you as a designer may want to have your own copy that is not the same as the one everybody uses, because necessarily there will be an enormous amount of complexity there. The sweet spot is that every designer is able to use Claude Code or Cursor and go in and make prototypes that people can use.
How does working with a primarily AI-based tool, designing a primarily AI-based product, feel different from what you were used to designing, primarily, not-AI stuff in the past?
It's different because I think a lot of the AI stuff starts from a technical thing and a solution, much less so than from a user problem. When I was at Browser, you know, all the work was, "what are the user problems we want to solve?" In an AI world, a lot of it is, what is the technical thing we have, and how do we put this into a box that solves problems for the user? And sometimes you're working backwards: you take the technology and you're like, "this is incredible." And yeah, you kind of have to go on faith that it will be useful to people. And you have to go, as a designer, find the intersection: what's useful and what people need.
Joel Lewenstein, who is the director of design at Anthropic, has talked about this on Ridd's podcast. He said, "people always talk about, you know, solutions in search of a problem as this bad thing, which is definitely what I came into thinking." And he's like "no, actually it can be good."
Sometimes it is okay to start from a solution, as long as you then also meet in the middle with the user problem. So it is much weirder because I'm going from a company where design was the reason the product existed, to a company where it's more like, "AGI is the whole deal," and the products are… I don't want to say secondary, but they're certainly not the most important thing.
Do you think that this is a short term quirk or a second order effect of the AI revolution -- that the tech is advancing faster than product? Or do you think that there's actually something fundamental and long term about this?
It's that the AI itself has agency, and so you are always going to be designing around the technology much more than you would normally. The technology has a seat at the table. It's like a new stakeholder in the game.
When we were working on the first prototypes of Arc Search, I was like, "hey, what if you could have it hallucinate a website?" It was cool, but not useful. And then I was like, "hey, what if you could feed in web context so that it can actually answer factual questions? Not just make stuff up?" Then it was like okay, let's just make this the search functionality. Then later it was, "oh, let's actually put guardrails on this for the output format to actually look nice." All that started from some stupid project. Just, "here's what you can do." So I think that kind of stuff is more and more valuable. Play, overall, I think should be an important part of people's jobs.
I think one thing that is really valuable is being able to live on something and being able to use it and rely on it day-to-day. I mean, maybe this comes from my Browser Company DNA of dogfooding – the idea that something is not real until you can actually integrate it into your daily workflow. And so the thing that I usually try to do is: oh, you see this thing that a model is pretty good at, maybe something you didn't think it was good at before. How do I get this into my life as quickly as possible, to see if it is worth trying to put into other people's lives?
When you think about creating AI products, where do you think your time was best spent? Or conversely, the time that you feel in hindsight was totally wasted?
I think a general lesson for product builders is that there are a lot of ideas I've been really excited to build, and so I've built them the right way… when I could have just done a quick and dirty prototype, like what you described: just using ChatGPT, putting in a prompt, and getting out an answer. 42 pages is a good example of that. I was like, did I really have to build all of the Figma UI to understand that this is not how people are going to design? Probably not.
There's this running undercurrent in design for AI, you know: you can create a piece of software and then a new model comes out and some subset of what you've created is totally useless because the model can just handle it. But I think anything where you spend time trying to figure out the grain of the model and what the material feels like, and getting in your reps of doing that, is valuable.
Is there a good way and a bad way to develop a feature, such that it is more or less resilient to model upgrades over time?
You could build a system where every time the model gets a little better and a little cheaper, we feed in more and more context so it gets better and better and better. But to your point earlier on, maybe in the future UIs will just be generated on demand. You can imagine a world in three years where you open up your maps app, and it's a beautiful illustrated map of exactly where you need to go and it says, "here are six new things to do."
You probably could make this app today! But you'd have to be really careful to build it in a way that's resilient to model upgrades. I think the naive way to do this is: when the user asks for places, search the places list, pull the comments, and then, based on the context of the chat, the comments, the listing, and other metadata, think about whether or not this is a good idea. But I would speculate that, basically, the more you can abstract this analysis pipeline, the better. So for example, avoid building too many hardcoded features that are specific to comments, because in the future, models will be better at computer use and can search up all kinds of things on the web for you, etc.
Yes. But the downside is that the more abstract you get, the fewer guardrails you have for today's models. Right? You can probably imagine in a year, the way this kind of feature is implemented is: you ask the chatbot for something and the chatbot has three tools it can use. It's: search place, search web, and a secret third thing. Then it presents UI made in HTML. That will be the ideal because the model can then apply its own judgment about exactly what it needs to do.
But that probably wouldn't work well today. It's a trade off between - do you want to design the ideal experience of the models today? Or do you want to let the model cook? If you let the model cook, it might do a bad job today.
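The tool-loop architecture sketched above (a chatbot with a small set of tools whose results get rendered as HTML) might be wired up roughly like this. The tool names and stub handlers are hypothetical, standing in for real search backends:

```python
# Hypothetical tool loop: the model picks a tool, the app runs it and feeds
# the result back, and the final answer is rendered as HTML the chat UI can
# display. Stubbed handlers stand in for real search backends.

def search_place(query: str) -> list[dict]:
    """Stub for a places search backend."""
    return [{"name": "Blue Bottle", "rating": 4.5}]

def search_web(query: str) -> list[str]:
    """Stub for a web search backend."""
    return [f"result for {query!r}"]

TOOLS = {"search_place": search_place, "search_web": search_web}

def run_tool_call(name: str, argument: str):
    """Dispatch a model-chosen tool call. Unknown tools fail loudly so the
    app can tell the model to pick again instead of silently guessing."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](argument)

def render_html(places: list[dict]) -> str:
    """Render place results as an HTML fragment for the generated UI."""
    items = "".join(f"<li>{p['name']} ({p['rating']})</li>" for p in places)
    return f"<ul>{items}</ul>"

places = run_tool_call("search_place", "coffee near me")
print(render_html(places))  # <ul><li>Blue Bottle (4.5)</li></ul>
```

The guardrail trade-off Nate describes lives in `run_tool_call`: a narrow tool registry constrains today's models, while a more capable future model could be handed broader tools and left to apply its own judgment.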
This is the art of AI Design – deciding when to let the model cook.
Yes, and knowing when to put the model in a baby chair.
What advice would you give to someone just starting out working with AI for the first time?
My advice is just build stuff. Build stupid stuff. Play around. I think that's always good advice with technology and design. My other generic piece of advice for designers is fucking copy people. I think that I learned a lot as a young designer by copying. You don't have to make something original at first.
The month before I joined Anthropic, I wasn't doing this deliberately, but I made my own coding agent that ended up basically being the same thing as Claude Code, but shittier. And I learned an enormous amount from that. I wasn't copying them, because I didn't know about it, but it was an enormous learning that carried over to when I worked there. I was like, oh, I know exactly how this stuff works now, because I've tried to build it. So I think now, especially in a time when it's much easier to code stuff: code, code. Make stupid stuff. And don't worry about it too much. Don't be too precious.
What's your feeling on chat as a UX paradigm?
I love chat. I mean, people love to be in their little chat apps chatting each other saying: "there's no way that chat is the future!!!" Obviously, when you're doing a very particular task, there are things that are useful to do in different interfaces. But chat is a great baseline. If you can't find an actual problem with the chat interface that you're solving by adding your own interface on top — one that overrides all the incredible flexibility and familiarity you get from chat — then you shouldn't bother.
As you think about your next year of work, what are you looking to do more of or try out in your process?
I definitely want to move more into the engineering side of understanding how the models are built. Right now, as designers, we're very far from the models. Most designers don't have a strong impact on how the models work — their values, how they interact with people.
But I think that's going to matter more and more as models get more powerful. What becomes more important is understanding our actual tools for shaping the models themselves. Prompting is probably still the biggest one, but hopefully there will be others that give us more agency.
I think that one thing AI designers are not doing enough of is participatory design with users. Otherwise designers and users are playing a bit of a game of telephone.
I think you're right. If I had to imagine the design tool of the future, there's some way to play around with prompts, but there's also a highly visual component where you're sketching stuff out.
Most importantly, I think the job of a designer in this world is holding two things in your head at once: what do the human beings need here, and what is the technology good at? And by technology I don't just mean the model — I mean the data we can store, the tools we have access to. If the designer can be the one person who has all of that in their head at once, that's enormously valuable.
Designers have had a ton of agency in the era of something like Arc. Arc is emblematic of this period where the underlying technology was the same for 15 years, the UI was the same for 15 years, and the best you could do was holistically redesign the product from first principles. Now we're in a new era where technology has opened up this Pandora's box — there's a Cambrian explosion of things you can do. And there's a world where that's actually kind of bad for designers, because products don't need to differentiate on design anymore when they're all so different anyway. I don't really know what the way out of that is.
Yeah. In some ways I feel the designer's job is much harder because you have so much more to understand. But in some ways it's also much easier because the bar is so low.
I think "talk to users" should be the number one thing. If you can't talk to users, you're cooked — there's no way you can do this job. Take any problem in the world that can be solved by AI. Send a designer to find 50 qualified research leads and have a half-hour conversation with each of them. After that, if you can put it all on a whiteboard and synthesize — here's what they need, here are the main things they care about — and you write that synthesis into Claude, you're in great shape. There was a period where you could get by just designing. Now the role increasingly requires that design mind and user focus, but you also need to understand the technology.
Hot Takes
I think it'd be really neat if someone made a new desktop operating system. There's never been a better time — the incumbents are worse than ever, and most software that matters is cross-platform. Maybe it doesn't literally need to be an OS; it could just be something that goes fullscreen and becomes the main way you use your computer.
The classic — ask the computer to ask you a ton of questions about your prompt before you get started.
Maybe Rollercoaster Tycoon? It's so quotable — "I want to go on something more thrilling than Spiral Slide 1!" They released an iPhone port that is perfectly made. People create the craziest stuff in it. Like, someone made a rollercoaster music video for the song from Wicked. It reminds you what makes us human. AI could never do something so weird.