Josherich's Blog


The Agent Network — Dharmesh Shah

28 Mar 2025


teams that are made of agents that can perform various tasks. Let's say you were doing something during the day and you come up with a good idea — a potential blog post that you want to write. You need a lot of context. You put that into a system and say, "Can you do some research on this?" That agent can take it and go do the research. I don't need immediate feedback from it. It may take the agent 10 minutes to come back, which is fine. That's the future state I'm imagining and want to get toward: a hybrid digital team, where I expect you as my team member to operate the way a human might — you may not always be there when I want to interact with you, but you're still around, and you can still accomplish tasks.

And that requires blending: how do I make the request? How do I queue things up so they come back to me at some meaningful point? So it's non-deterministic on some paths, but deterministic on others. The other thing that's been happening is that as the reasoning models have been accelerating so quickly, the balance of non-deterministic versus deterministic is shifting. This is where I think we can see the constructs for how agents should be structured to get the best value. So I wouldn't presume to say, "Okay, here's how that product feature should work," but rather: can we show ways that agents learn over time? Can we expose some of this information? Once you start to drop the deterministic paths down, what might start happening is an LLM going through a similar decision-making process to detect which agents or services it needs to call at a given time. And now I want to see how that feedback cycle works. I've never seen the space like this, and I'm not sure how the feedback loop or learning process should manifest itself within agent systems. That's what I'm trying to work through right now.

Ultimately, with agent.ai, the goal is to create this ecosystem, a new world where agents not only can accomplish tasks but can present themselves, their profiles, and their skills. I want it to be effective for users who need to accomplish various work. That’s the direction we’re ultimately headed.

In closing, I'm very excited about the possibilities for agents and the community tools they can provide, and I think we're going to see more models and functionalities that enable these team dynamics in the future. If we can collaborate thoughtfully, allowing digital agents and humans to interact efficiently while still keeping standards in mind, I can see a very innovative future ahead.

It's the same as with human teams. You would not go to a coworker and say, "I'm going to ask you to do this thing," and then sit there and wait for them to go do it. That's not how the world works. It's nice to be able to just hand something off to someone: "Okay, maybe I expect a response in an hour or a day or something like that." There's some implicit contract we have with our coworkers in terms of when things need to happen.

So, the UI around agents. If you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? We have inputs of four different types: a dropdown, a multi-select — all the things from the original HTML 1.0 days. It's the smallest possible set of primitives for a UI, and it exists just because we need to collect some information from the user. Then we go do steps and generate some output — HTML or markdown being the two primary formats.

The thing I've been asking myself is where that path leads. People ask me, I get requests all the time: "Oh, can you make the UI sort of—? I need to be able to do this," right? If I keep pulling on that thread, it's like, "Okay, well now I've built an entire UI builder. Where does this end?"

I think the right answer — and this is what I'm going to go back to coding once I get done here — is injecting code generation and UI generation into the agent.ai flow. As a builder, you say, "Okay, I'm going to describe the thing that I want," much like you would in a vibe-coding world. But instead of generating the entire app, it generates the UI that exists at some point in that deterministic flow. It says, "Here's the thing I'm trying to do. Go generate the UI for me." Then I can go through some iterations.

The way I think of it: I'm going to generate the code, tweak it, go through this prompt-and-iterate style like we do with vibe coding now. At some point I'm going to be happy with it, and I'm going to hit save. That becomes the action in that particular step. It's like caching the generated code so that I don't incur any inference-time costs afterward. It's just actual code at that point.
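The "generate, tweak, save" pattern he describes could look something like the sketch below. Every name here is invented for illustration — agent.ai's internals are not public — and the stub model stands in for a real client so the example runs:

```python
# Minimal sketch of "generate, tweak, save": iterate with an LLM, then
# freeze the result so replays cost no inference. Hypothetical names.

class StubLLM:
    """Stand-in for a real model client, so the sketch is runnable."""
    def generate(self, prompt: str) -> str:
        return f"<form><!-- generated for: {prompt[:40]} --></form>"

def build_ui(llm, description: str, feedback_rounds: list) -> str:
    """Iterate on generated UI code with the builder in the loop."""
    code = llm.generate(f"UI for: {description}")
    for note in feedback_rounds:
        code = llm.generate(f"Revise per '{note}': {code}")
    return code

class SavedStep:
    """Hitting 'save' caches the code; rendering it is plain execution."""
    def __init__(self, ui_code: str):
        self.ui_code = ui_code  # frozen at save time

    def render(self) -> str:
        return self.ui_code  # no LLM call here

step = SavedStep(build_ui(StubLLM(), "research intake form", ["wider textarea"]))
```

The key property is that the model is only in the loop during authoring; once saved, the step behaves deterministically.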

I invested in a company called E2B, which does code sandboxes, and they power LMArena's web arena. It's basically what LMSYS did for text-to-text, but for UI generation — you ask different models to build the same thing and compare. That's where I'm really fascinated.

The early LLMs, you know, were understandably, laughably bad at simple arithmetic, right? That's something my wife and the normies would call us on. My son would be like, "It's just stupid. It can't even do simple arithmetic." Over time it's been understood that there's a reason for this: the word "language" is in there for a reason, in terms of what it's been trained on. It's not meant to do math. But now the fact that it has access to a Python interpreter it can actually call at runtime solves an entire body of problems it wasn't trained to do. It's basically a form of delegation.
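That delegation can be shown in miniature: route arithmetic to an actual interpreter instead of the model. This is a toy, safe expression evaluator written for illustration — not any particular provider's tool-calling API:

```python
# Toy "arithmetic tool" a router could hand math-looking requests to,
# so the model never has to be good at arithmetic itself.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate arithmetic only, by walking the parsed AST."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a plain arithmetic expression")
    return walk(ast.parse(expr, mode="eval").body)
```

A router in front of the model would send expressions here and everything else to the LLM — the same shape as giving the model a real Python interpreter as a tool.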

So the thought that’s kind of rattling around in my head is, that’s great. So it took the arithmetic problem first. Now, like anything that’s solvable through a relatively concrete Python program, it’s able to do a bunch of things that I couldn’t do before. Can we get to the same place with UI? I don’t know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, but not in the classic sense. Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that, so it’s a little bit more predictable. I don’t know.

Especially: when is the human supposed to intervene? If you're composing agents, most of them should not have a UI, because then they're just webhooking to somewhere else. — I just want to touch back; I don't know if you have more comments on this, but when you said you're going to go back to code, what are you coding with? What's your stack?

So Python's my language. I'm glad that it won as the AI language — it's the lingua franca. It's the second-best language for everything. By the way, this is exactly one of the things I disagree with Bret Taylor on from when he was on — and generally, I'm a massive Bret Taylor fan; smart, one of my favorite people in tech. There was a segment where he was talking about how we need a different language than Python, one built for AI. It's like, "No, Bret, I don't think we do, actually." Python does just fine. It's expressive enough.

It's nice to have a language we can use as a common denominator across both humans and AI. It doesn't slow the AI down much, but it does make it awfully easy for us to also participate in that kind of future world, where we can still be somewhat useful. Anyway — so it's Python, with Cursor as my codegen tool.

I would also mention that I really like your code generation idea. I have another thesis I haven't written up yet about how generative UI has not fulfilled its full potential. We've seen the Bolts and Lovables, and those are great. Vercel has a version of generative UI that is basically function-calling into pre-made components. There's something in between, where you should be able to generate the UI that you want, pin it, and stick to it — and that becomes your form.

The way I put it: the two form factors of agents that have seen a lot of product-market fit recently are deep research and the AI app builders like Bolt and Lovable. I think there's some version of this where you generate the UI, but you generate it as a mad-libs, fill-in-the-blanks form, and then you keep that stable while deep research fills it in.

I love those kinds of simple mutations and abstractions. But look at what I'll call almost the polar opposite of that: right now, most of the UIs that you and I conceive of are based on the primitives and the vocabulary we have for UI today. We have text boxes, checkboxes, radio buttons, pulldowns, nav, clicks, touches, swipes, voice — whatever it is. Given the set of primitives that exist right now, we will combine them in interesting ways.

Where AI is headed on the UI front, I think, is the same place it's headed on the science front. Originally it's like, "Oh, based on the things we know right now, it'll sort of combine them." But we're right at the cusp of it being able to do actual novel research. Maybe a future version of AI comes up with a new set of primitives that work better for human-computer interaction than the things we've done in the past. I don't think it ended with the checkbox, the radio button, and the drop-down list. I think there's life beyond that.

I know we're going to move to business models after this, but when you talked about hybrid teams — one way we talk to folks about it is: you had offshoring, then you had onshoring, which is moving to a cheaper place within the country rather than offshore. Now it's like AI-shoring.

You're moving some roles to AI — that's the thing people say, AI-shoring. — That's the first I've ever heard of that. But to me, the most interesting thing about professional networks is that with people, you have a limited ability to evaluate a person, so you have to use previous signals as a proxy. With agents, theoretically, you can have proof of work. You can run simulations and evaluate them directly.

How do you think about that when running and building agent.ai? Instead of just choosing one agent, I could literally run across all of them and figure out which one works best. — I'm a big believer in that. Under the covers, when you're building, the primitives are so simple: you have some set of inputs, and we know what the variables are.

Every agent that’s on agent.ai automatically has a REST API that’s callable in exactly the way you would expect and automatically shows up in the MCP server. You’re able to invoke it in whatever form you decide to. My expectation is that in this future state, whether it’s a human hiring an agent to do a particular task or evaluating a set of five agents to do a particular task and picking the best one for their particular use case, we should be able to automate that. It’s like, “I just want to try it.”
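Automating that evaluation could look like the sketch below. Agents on agent.ai do expose REST endpoints, but the endpoint path, payload shape, and scoring here are all invented for illustration — this is not the actual API:

```python
# Hypothetical "run my eval across N agents and pick the best" sketch.
import json
from urllib import request

def call_agent(base_url, agent_id, inputs):
    """POST inputs to an invented run endpoint and return its output."""
    req = request.Request(
        f"{base_url}/agents/{agent_id}/run",  # invented path
        data=json.dumps(inputs).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["output"]

def pick_best(agent_ids, eval_cases, call, score):
    """Run every case through every agent; return the top scorer.

    `call(agent_id, inputs)` lets you plug in call_agent or a stub;
    `score(output, expected)` returns a float, higher is better.
    """
    def avg(agent_id):
        return sum(score(call(agent_id, c["inputs"]), c["expected"])
                   for c in eval_cases) / len(eval_cases)
    return max(agent_ids, key=avg)
```

Parameterizing the transport (`call`) is what makes the same eval usable against live agents, cached transcripts, or a free trial quota.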

There should be a policy that the publisher and builder of the agent has that says, “Okay, well, I’m going to let you call me 50 times, 100 times before you have to pay,” or something like that. We should have effectively an audit trail, like, “Okay, this agent has been called this many times.” We also have kind of human ratings and reviews right now. We have tens of thousands of reviews of the existing agents on agent.ai, and the average is like 4.1 out of five stars. All those things are nice signals to be able to have.

But the callable, verifiable kind of thing, I think, is super useful. If I could just call an API that says, "Here are five agents that solve this particular problem," and I have a simple eval, that would be so powerful. I wish I had that for humans, honestly. That would be so cool. Because when I was running engineering teams, people would try to come up with these hiring rubrics, and they're not really helpful.

You just need some ground truth. I feel like now, say you want to hire an AI software engineer, you can literally generate 15 or 20 examples of your actual issues in your organization — both from a collaboration perspective and actual code generation — and just pay to run it. Today we give take-home projects and we pay people.

This should be the same kind of thing: I'll just run you. But I feel like people are not investing in their own evals as much as they should internally — present company included, right? Everyone talks about evals; everyone accepts that we should be doing more with them. I won't say nobody, but almost nobody actually does.

It's a topic for a whole other day. It's funny because, obviously, HubSpot is famous for launching graders of things. — Yes, you'd be perfect for it. — Agreed on evals, by the way. I just force myself, or someone I work with, to be the human in the loop, and that's okay. But obviously the scalable thing needs to be done.

Just a fun fact — or a question — on agent.ai. You've already famously talked about the chat.com acquisition and all that. That was around the time of custom GPTs and the GPT Store launching. Yes — and I definitely feel agent.ai is kind of the GPT Store, which was never really taken seriously. If OpenAI woke up one day and said, "Agent.ai is the thing; we should just reinvest in the GPT Store," is that a fear?

I think that's not agent.ai-driven; it's an inevitability that OpenAI will do that. I don't have any insider information — I'm an investor, but no insider information — it's just that it makes too much sense for them not to. They've taken multiple passes at it, right? They did plugins back in the day, then custom GPTs, then the GPT Store. Being the platform that they are, I think it's inevitable they'll ultimately come back to it.

You know, it's going to happen. One of the things I promised myself I would never do is compete with Sam Altman — ever. Not intentionally, anyway. But here I am. But I'm not really, right? Not really — it's free. So, whatever. But at some point.

But I mean, what they're doing is actually valuable. They're solving a much, much bigger problem; I'm a small, tiny rounding error in the universe. What compelled me to create it in the first place, even knowing custom GPTs existed — I did have this rule in my head that said don't compete with Sam; he's literally at the top of my list of people not to compete with. He's so good.

The thing I needed, in terms of my own personal use — which is how agent.ai got started — was because I was building a bunch of what I call solo software: things for my own personal productivity gain. I found myself doing more and more LLM-driven stuff because it was better that way. AI sort of showed up in those solo projects a bunch.

The thing I needed was an underlying framework to build these things. High on the list: I want to be able to straddle models, because certain steps in the flow have different needs. "Oh, this particular step involves writing, so maybe I want to use Claude. For this other step, maybe I want something else" — even around image generation, different models depending on whether the image has text in it or not.

I want to be able to mix and match. My sense is that whether it's OpenAI or Anthropic or whoever, they're likely going to have an affinity for their own models, right? Which makes sense for them. But I can be, for my own purposes and for our user base, a little bit of Switzerland. We don't think there's one model to rule them all. Based on your use case, you're going to want to mix and match, and maybe even swap models out.
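The "straddle models" idea can be sketched as each step in a flow declaring which model it wants, with a runner dispatching to the right client. The step shape, model names, and client interface here are illustrative assumptions, not agent.ai's actual framework:

```python
# Sketch: per-step model selection in a multi-step agent flow.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    model: str                      # e.g. a writing model vs a cheap classifier
    prompt: Callable[[dict], str]   # builds the prompt from accumulated state

def run_flow(steps: List[Step],
             clients: Dict[str, Callable[[str], str]],
             state: dict) -> dict:
    """clients maps a model name to a callable(prompt) -> text."""
    for step in steps:
        state[step.name] = clients[step.model](step.prompt(state))
    return state
```

Because each step names its model, swapping providers for one step is a one-field change rather than a rewrite of the flow.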

Maybe even test them — back to the eval ideas. I have this agentic workflow, and here's the thing we've been playing with recently, because we have enough users now that when I look at the LLM bills, it's like, "Oh, I'm spending real money now."

And this is just human nature, right? It's not just normies. You have this dropdown of all the models: "Which model do you want to use in your agent.ai agent?" As it turns out, people pick the largest number. They'll pick 4.5 or whatever it is, right?

Oh my God, you're doing 4.5? — Yes. Ouch. — Yes. But the thing I've promised myself is that we will support all of them regardless of what it costs. Once again, I see this as a research thing, a benefit to humanity. Inference costs are going down — or at least that's what I tell myself late at night so I can sleep.

They pick the highest-numbered one. We have an option in there right now — the first option, actually — that says "let the system pick for me": auto-optimize. As it turns out, people don't choose it. They just pick the highest number, because they don't trust the system yet — which is fine; they shouldn't trust it completely.

One thing we discovered — and this is the thing we're testing by back-channeling it — is that if I run the same agent a thousand times, on our own internal agents first, I can compare models. Because we're getting human ratings and reviews on these agents all the time, we can see that going to a lower model gives a dramatic, multiple-orders-of-magnitude cost reduction with literally no change in the quality of the output, right?
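The decision rule behind that back-channel test can be stated very simply: downgrade when the cheaper model's average human rating stays within some tolerance of the expensive model's. The threshold and data shapes below are assumptions for illustration, not agent.ai's actual logic:

```python
# Sketch: compare mean human ratings before switching an agent
# to a cheaper model. Ratings are, say, 1-5 stars per run.

def should_downgrade(expensive_ratings, cheap_ratings, tolerance=0.1):
    """Switch if the cheap model's mean rating is within `tolerance` stars."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cheap_ratings) >= mean(expensive_ratings) - tolerance
```

In practice you would also want enough samples for the means to be meaningful, and you might fold latency into the comparison, since the cheaper model often wins there too.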

That makes sense, because so much of what we're doing doesn't require the most powerful model. Using it is actually bad: there's higher latency, so it's not just a cost thing. In that future state, I think we're going to have model routing and a whole body of people working on that problem too — "Help me pick the best model at runtime." — Would you buy or build model routing?

I buy everything that I can buy. I don't want to build anything I don't have to. — One of the most impressive examples of this, I think, was our Chai AI conversation, which I think about a lot. He views himself explicitly as a marketplace. You are kind of a marketplace too, but he has a third angle — the model providers — and he lets them compete.

I think that sort of three-way marketplace makes a lot of sense. I don't know why every AI company isn't built that way. — It's a good point, actually. It's on the list of things I'm super passionate about: I'm very passionate about efficient markets — or, put differently, extremely irritated by inefficient markets. Efficient markets, for the normies listening, are markets where every possible transaction that should occur actually does. That's an efficient market.

Why do inefficient markets exist? Maybe the buyer and seller don’t know about each other. Maybe there’s not enough of a trust mechanism. There’s no way to actually price it or come up with fair market value, fair pricing. As you kind of knock those dominoes down, the market becomes more and more efficient. Lots of latent value exists as a result of inefficiency, and whoever removes those inefficiencies for high-value markets makes a lot of money.

This is one of those examples: there's an inefficiency right now because we're using overpowered models, or whatever it is. Let's reduce that to an efficient market. The right model should be matched up with the right use case at the right price. — Very interesting. Have you looked into DSPy? — I have looked at it, though not deeply enough.

It's supposed to be, as far as I understand, the only evals-first framework. — Evals are so important. — By the way, the connection is that DSPy would also help you optimize your models, because you did the evals first. — Yep. — I wonder why it's not more popular — I mean, it is growing in traction, I would say. We're keeping an eye on it.

Let's talk about business models. You have kind of two: work as a service and results as a service. I'm curious how you divide the two. — So, work as a service: we know about software as a service, right? I'm licensing software that's delivered to me as a service; that's been around for decades. We understand that. But the consumer of that service is generally a human doing the actual work, whatever software you're buying.

Work as a service is when the software is actually doing the work, whatever that work happens to be. If I come up with discrete use cases — classification, legal contract review, whatever — the software is doing the thing. Results as a service is charging for the outcome, not the work. Instead of saying, "I'm going to pay you X dollars to review a legal contract," or paying for time or number of uses, I'm going to pay you for the actual result.

My take: parts of the industry are super excited about this kind of results-as-a-service, or outcomes-based pricing. I think the reason — and I think we're over-indexing on it — is that the most popular use case on the agent side right now is customer support. Well-documented.

A lot of the providers that have agents for customer support price on number of tickets resolved times X dollars per ticket. The reason that makes a lot of sense is that customer support departments already have a sense of what a ticket costs to resolve the current way. So you can come up with an approximation for (a) what the economic value is.

There's also (b) at least a semi-objective measure of what an acceptable resolution or outcome is, right? You can say, "We measured the net promoter score or CSAT for tickets." As long as customers say 90% of tickets were handled in a way they were happy with — or wherever your line is — and as long as the AI can replicate that same SLA, there's a ground for comparison.

I think the reason we're over-indexed, though, is that there are not many use cases that have both of those dimensions — objectively measurable, with a known, constant economic value. Customer support tickets make sense because they're handled by humans, and humans have a discrete cost — especially in retail, where this originally got started, in B2C companies with a high volume of tickets. A ticket is roughly worth the same because it takes about the same amount of time for most humans to do that tier-one support.

In other things, the value per outcome can vary dramatically — literally by orders of magnitude — in terms of what the thing is actually worth. That's thing number one. Thing number two is: how do you objectively measure? Let's say you're going to do a logo creator as a service, priced on results. That's a completely subjective thing.

It may take me 100 iterations; it may take me five. The quality of the output is not entirely under my control — it's not all up to the software. You could have weird taste, or maybe you didn't describe what you're looking for well enough, and it was just not a solvable problem. Design and other qualitative, subjective disciplines deal with this all the time: how do you make for a happy customer?

There's a reason they say, "We'll go through five iterations, and our output is this logo, for $5,000 or $500 or whatever it is." But that's hard to do at scale. — Just a relatable anecdote: we have a podcast, we just got a new logo, and we ran a 99designs contest for it. So many designers were working really hard, but I just didn't know what I wanted.

It was just, "Too bad — I know you seem great, but you know." — And that's another example of a market made efficient. I've been a 99designs user and customer for a dozen-plus years now. It's fantastic. For the designers, it doesn't cost that much to participate, but it's worth a lot to us, because we can't design anything.

By the way, pro tip on 99designs: on the margin, you're better off committing to paying a designer — picking a winner whether you like the result or not. It gets higher participation. You'll still get a bunch of noise in there, but the quality of the outcome is often a function of the number of iterations.

Logo design is one of those examples. If you had to choose between 200 logos and 20 logos, with 200 you're more likely to find something you like. For those interested, I have a blog post with my reflections on the 99designs thing. They give an estimate of how many designs you'll get — with the committed prize, the estimate is something like 30 to 60, but in practice it's more like 200. So it's underpriced.

Do you think some markets are just fundamentally going to move to more results-driven business models? — Probably. I don't understand enough markets well enough, but if we had to rank-order them, there's likely some dimension along which we could sort.

Is there an objective measure of the outcome? Is there a way to price it, with low variance on the value per outcome? Wherever those two things are true — customer support is one example — there are likely lots of other industries where they hold.

The thing I wonder, though, is: from the customer's perspective, would they rather pay for work as a service than for outcomes? Maybe the way they think about it is, "That's my arbitrage opportunity. I can get work done for X, but the value is actually Y. Why would I want that delta to be squeezed out by the provider of the software, if I have a choice?"

Oh, I mean — okay: attribution. There are 18 things that go into that outcome, and you're one of them. It's hard to tell. By the way — obviously you're in this industry, even if it's not exactly HubSpot's part of the market — what have you seen in attribution that's interesting? Because that directly ties into work as a service versus results.

Not enough, because we are so behind — as a world, as an industry, pick your thing. This is why I think Web3, in the way it was meant to be done, is going to make a comeback: the fundamental principles of it make sense. What happened in that world was a bunch of crypto bros and grifters and NFT stuff that was only loosely related.

There was no actual substance there. But the ideas — a blockchain, a trackable thing, being able to fractionalize digital assets, attribution, an audit log, a published thing that's verifiable — all those primitives make sense, right? There's a limited but not zero set of use cases where the overhead — the tax for storing data on the blockchain, and there's certainly a tax — is worth it. It doesn't make sense for all things, but it makes sense for some things, for sure.

We just don't have attribution in any meaningful way. — Isn't it sad? It's so important. — I know. No answer. It partly comes down to incentives. The people who actually have the data — or the parts of it — from which attribution could be calculated or derived don't really have the incentive to make that data available.

Even something as simple as the PPC side, right? The Google search world — that's been my world. We have less data now than we did back in the day in terms of click-throughs and things like that. Google used to actually send you the keywords people typed; they even took that away.

It's hard to connect the dots back on things. We're seeing that not just in PPC but across all sorts of things. They took that away from Search Console. — What's that? Search Console has that. — Yes, they took that away. — Search Console has it, though: for your website, if you go to Google Analytics, you can connect it back to Google Search Console.

Okay, I see. Well, it's a known thing; you don't have to turn it into a rant about Google. What about software engineering? Do you think it will stay work-as-a-service? Or do you think — I mean, most companies hire a lot of engineers, but they don't really know what to do with them, or they don't use them productively.

I'm actually bullish on engineers in terms of their long-term economic value — not despite all the movement in codegen and the things we're already seeing, but because of it. What's going to happen as a result of AI — and people have talked about this in other disciplines too — is that we're going to be able to solve many more problems.

So the math guy in me says: people argue, "Agents are going to be doing code; there will be a million virtual, digital software engineers out there, so the value per engineer is going to go down, because I as an engineer am just in that same mix." What they don't recognize is that it's not just about the denominator. There's a numerator as well: the total economic value that's possible.

I would argue the numerator is growing faster than the denominator — the actual economic value that's possible as a result of software, and of what engineers can produce with the tools they'll have at hand. So I think the value of an engineer actually goes up. They're going to have power tools that let them solve a larger base of problems that need solving.

It feels to me like it'll stay work as a service. You're paying for work; I don't think there's another way to do it. But there will be a split among engineers — we see this all the time in the media industry, where you have staff writers.

But then you have freelancers who write articles, or who manifest their creative talent however they do. Both make sense, right? There's work for hire, and there's the outcome-based model: "I produced this thing." Some of those engineers will even produce agents, put them in a marketplace like agent.ai someday, and that's how they make their millions.

Any other thoughts on agents? We covered a lot of territory. — I'm excited about agents. My message to the world would be: don't be scared. I know it's scary — easy for me to say as a techno-optimist — but learn it. Even if you're a normie, even if you're not an engineer or an AI person, think of yourself as an AI person. Use the tools.

I don't care what role you have right now or where you are in the workforce — it will be useful to you. Start to get to know agents. Use them; build them. My message for engineers is always: there's more to go. We're still in the early days of figuring out what the agent stack looks like.

I want to push people toward agents with memory.

Alright — agents with planning. We have to talk about memory. — We've got to talk about memory. Let's go. — Yeah, let's do it. Because in my mind, the next frontier is actual long-term memory, both for agents and for agentic networks, in a trustable, verifiable — I won't say privacy-first, but privacy-oriented — way.

I have an issue with the term "privacy-first," because a lot of the time we say privacy-first when we don't really mean it. Privacy-first means I value privacy above all else, no matter what we're talking about — and that's just not true for any human.

Anything that wants to be used. Memory is an interesting thing. The thing I’m working on right now, lots of things in play in agent.ai, is around the implementation of memory. There are three projects out there, Mem0 being one of them.

But the thing that’s interesting for me, and we see this in ChatGPT and other things right now, where it does have the notion of a longer-term memory it can pull back into context as needed. The thing I’m fascinated by is cross-agent memory. If I’m an agent builder right now, it’s like, “Okay, here are the things that I sort of know or I’ve learned from the user in terms of pulling out the—I’ll call them knowledge nuggets for lack of a better term.”

That's great. But then when the next agent builder comes out, and it's the same user, shouldn't all the things that agent one learned about me be available, if it's going to be useful for agent two, as long as I opt into it? It's like, "Yeah, I don't care." In fact, I would find it awfully annoying to tell agent two and agent N and agent N plus one all the same things I've already told agent one.

It should know, like the system should know. This is part of the reason why I’m a believer in these kind of networks of agents and shared state—it’s that user utility gets created as a result of having shared memory.

Not just that we should solve the memory problem for an independent agent, but then we should also be able to share that context, share that memory across agents. That’s part of the value prop for agent.ai. It’s like, “Okay, when you’re building, we’ve got, you know, whatever million users, and we’re going to have growing memory about all of them.”

So instead of you going off on your own thing and building an agent out as this disconnected node in the universe, here’s the value for building on the network or on the platform, ours or someone else’s, because more user value gets created. It’s more utility.
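The opt-in, cross-agent memory being described here could be sketched roughly like this. This is a toy illustration with made-up names (`SharedMemoryStore`, `remember`, `recall`), not agent.ai's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemoryStore:
    """Toy cross-agent memory: knowledge nuggets keyed by user,
    released to an agent only after the user has opted in."""
    nuggets: dict = field(default_factory=dict)   # user_id -> list of nuggets
    consents: dict = field(default_factory=dict)  # user_id -> set of agent_ids

    def remember(self, user_id: str, agent_id: str, nugget: str) -> None:
        # Any agent the user already works with can contribute what it learned.
        self.nuggets.setdefault(user_id, []).append(nugget)
        self.consents.setdefault(user_id, set()).add(agent_id)

    def opt_in(self, user_id: str, agent_id: str) -> None:
        # Explicit user consent for a new agent to see the shared nuggets.
        self.consents.setdefault(user_id, set()).add(agent_id)

    def recall(self, user_id: str, agent_id: str) -> list:
        # A new agent sees nothing until the user opts it in.
        if agent_id not in self.consents.get(user_id, set()):
            return []
        return list(self.nuggets.get(user_id, []))

store = SharedMemoryStore()
store.remember("user1", "agent_one", "prefers async communication")
print(store.recall("user1", "agent_two"))  # [] — no opt-in yet
store.opt_in("user1", "agent_two")
print(store.recall("user1", "agent_two"))  # ['prefers async communication']
```

The point is the consent check in `recall`: agent two only gets what agent one learned after the user opts in, which is the "as long as I opt into it" condition above.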

How do you think about auth for that? Because part of memory is like selective memory. Take scheduling, for example. If I have a scheduling agent, you should be able to access the events you’re a part of and like what times I have available, but it shouldn’t tell you about other events on my calendar.

What's that layer like? I have so many thoughts on this. This is the opportunity out there, solving these kinds of fundamental problems, the kinds of things that are going to need to exist. Right now, the closest approximation we have is auth, OAuth 2.0, right? Everyone has to OK it, and it's a very, very coarse set of scopes, based on the provider of the OAuth server, be it Google, HubSpot, whoever it is, it doesn't matter.

You pick a set of scopes. They could have defined the scopes to be super granular and fine, but it's up to them, and that is going to move so slowly. For instance, the use case I have right now: I use email for everything. I use it as an event and data bus for my life, right? I mean this literally. Anything that I do, if there's a way to get that into email, I do it, because I know it's an open protocol and I can get to that data in useful ways.

I have 3 million emails that I've built a vector store off of to solve my own personal use cases. So I'll give you an example, but obviously I'm not going to build all my own software for everything.

But if a startup comes along and says, "Dharmesh, can you make your email inbox available in exchange for these things?" I'm like, hell no. That's literally everything; my life is in here, right?

So you need to share subsets. Yes. And so, maybe this is not the actual implementation, but imagine if someone said, okay, I have a trusted intermediary, however trust gets defined, and I OAuth into this thing, and it gets to control access. I can say in natural language: I only want to pass email to this provider where the label is one of X, or where it's within some recent window, and no more than 50 emails in a day, or whatever. So I don't have them dumping the entire 3 million email backlog, whatever controls I want to put on it.

It's unlikely that the OAuth server side right now, the Googles, big ones, small ones, it doesn't really matter, are going to do that. But this is an opportunity for someone. They're going to need to get to some scale and build some level of trust that says, okay, I'm going to hand over the keys to this intermediary. But then it opens up a bunch of utility, because it gives me control, more fine-grained control.
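That kind of natural-language rule ("only these labels, only recent, no more than 50 a day") would compile down to something like a policy filter at the intermediary. A minimal sketch, with a hypothetical policy shape, assuming emails arrive as dicts with a label and a received date:

```python
from datetime import datetime, timedelta

# Hypothetical policy: only emails with an allowed label, newer than 90 days,
# and at most 50 released per day, ever reach the third-party agent.
POLICY = {
    "allowed_labels": {"startups", "agents"},
    "max_age_days": 90,
    "daily_cap": 50,
}

def filter_emails(emails, policy, now=None):
    """Return only the emails the sharing policy permits."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=policy["max_age_days"])
    released = []
    for email in emails:
        if len(released) >= policy["daily_cap"]:
            break  # daily quota exhausted; nothing more goes out today
        if email["label"] not in policy["allowed_labels"]:
            continue  # label not on the allowlist
        if email["received"] < cutoff:
            continue  # too old to share
        released.append(email)
    return released

now = datetime(2025, 3, 28)
inbox = [
    {"label": "startups", "received": datetime(2025, 3, 1), "subject": "pitch"},
    {"label": "personal", "received": datetime(2025, 3, 2), "subject": "family"},
    {"label": "agents",   "received": datetime(2024, 1, 1), "subject": "old"},
]
print([e["subject"] for e in filter_emails(inbox, POLICY, now=now)])  # ['pitch']
```

The interesting product problem is upstream of this: translating the user's natural-language request into that policy dict, and getting the user to trust the intermediary holding the keys.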

Yeah. I'd say LangChain has an interesting one. There are a bunch of people who have tried to crack AI email. Every single one of them who has tried has pivoted away. Yep. And I'm waiting for Superhuman to do it. Yep. I don't know why they haven't, but, you know, at some point. That's some cool AI stuff.

Yeah. I think the pace needs to increase. But I think this goes back to like Open Graph. Yeah. Right. Which is like, I think Google is not incentivized to build better scopes. Nope. And like, they’re just not going to do it. Nope.

So we can't even get... we still haven't been able to get semantic search out of Google. No, totally. You know, they just made the announcement this week.

What do you mean? Semantic search? In Gmail? Oh, I see. So, okay. So they have all my 3 million emails. Why don't they have a vector store where I can do just basic RAG, right? Actually, that's really bad. They're indexing the entire internet in real time. I don't think my email is that big a deal, but.

Yeah. My standard thing on memory is, it sounds like you are using Mem0. I am. There's also MemGPT, now Letta, which gave a workshop at my conference. There's Zep, which uses a graph database; it's open source, kind of interesting. And LangMem from LangGraph, which I would highlight.

Also, it’s really interesting, this developing philosophy that people seem to be agreeing on a hierarchy of memories, from semantic memory to episodic memory to, I think, just overall sort of background processing. Like, we have independently reinvented that AI should sleep to do that deep REM processing of memories. It’s kind of interesting.

Yep. Yeah, that is. I mean, just on the notion of memory and hierarchies. So, you know, the memory we're working on right now is at the user level and it's cross-agent, right? But one step up, going back to these hybrid digital teams, you can imagine saying, oh, well, my team has this kind of shared memory. I don't want to share it with the world or anything, but this set of agents, across this group of people, I want to have shared state, like we would have in a Slack channel or something like that.

And that should sort of exist as an option, right? And the platforms should provide that. And the Bee folks, I should also mention, have said that they're working on that as well. So imagine being able to share, you know, selective conversations with people. Like, that's nice. Limitless has, I guess, voice-based sharing.

Yeah. I don't think they have that action yet. I'm an investor in that too, by the way. Oh, really? Yeah. So, full disclosure. I'm trying to think about all the things I've said. I've invested in OpenAI, Perplexity, LangChain, CrewAI, Limitless, a bunch of them.

So if I've said anything, by the way, I have no insider knowledge. I'm not trying to plug or pitch or anything like that. No, no, no. I think it's understood. We often note: if you have skin in the game, you probably invested, or may or may not have. I'm not an investor in Bee; I'm just a friend. And I think you should be able to speak freely of your opinions regardless.

Okay. We have some miscellaneous questions that may be zooming out from agent AI. Sure. First of all, you mentioned this, and I have to ask, you have, you know, so many AI projects you’ll never get to. Yep. What’s one or two that you want other people to work on?

Oh, wow. Drop some from your list. I want other people to work on. Because you’ll never get to it. Yeah. What I need to do is I’ve had this thought before. So I have this is like maybe like pick one a week or something like that and give the domain away. Like I have people submit their one-pager or something like that. It’s like, if you can convince me that you have at least enough of an idea, enough like willingness to kind of commit to actually doing something.

It's the ones that you keep mentioning, but you haven't gotten to for whatever reason. Yep. Yep. Traffic. Like, some of them, I don't have the underlying business model for. We're going to have to come back to this. Maybe do a follow-up episode. They're just not jumping to mind.

You don’t need the business model. Just, just. Yeah. So I own scout.ai. Okay. I think that’s an interesting. By the way, pretty much all of them, there was an idea at the time. It’s like, it was one of those late-night. It’s like, oh, I could do this. Is the domain available? And I’ll go grab it. I’m trying to think what else I have on the AI space.

I have a lot of nonprofit domain names as well, for things like a nonprofit Open Graph. I'm not sure why more are not jumping to my head. I have agent.com, which obviously is tied to agent.ai. That's going to be big. That's going to be big. Oh my God. That's going to be like a 30 to 50 million dollar domain. It's going to be big. Yeah. I think it'll end up being bigger than chat.com, which was 15.

Yeah. Yeah. It’s more work-oriented. Yep. That’s interesting. Yeah. Do you want to talk about the chat.com thing? I would love just the backstories. Like, did you just call up Sam one day and be like, I got the domain? Did they kind of get back to you knowing that you had it?

It's a good story. Back in the original ChatGPT days, the first thought I had in my head, which lots of people had in their heads, is that OpenAI is going to build a platform, and ChatGPT is actually just a demo app to show off the thing. And there's precedent for tech companies that have had, you know, demo apps to help normies understand the underlying technology.

And even after the initial boost or whatever, my original thought was: well, someone should actually create an actual real product. And that product should be called chat.com, because GPT is not a consumer-friendly name at all. It's an acronym. It's not pretty; it doesn't roll off the tongue. And ChatGPT was just a demo app back then. And so I, you know, got chat.com.

And then, as it turns out, ChatGPT is a real product. And I was at an event here in San Francisco that Sam spoke at, where he launched plugins. I think that was the announcement at the time. Yeah. And the thing is, I had sort of suspected: there's no way that OpenAI is going to launch plugins for ChatGPT if they were not thinking of it as an actual platform.

It's not just about the GPT APIs. This is a real thing. I'm like, crap, this violates the first rule of Dharmesh, which is: don't compete with Sam. I knew when I bought the domain that there was competition for it. There were other companies looking to buy it. I don't know who they were. I had suspicions.

So I bought it, and then I'm like, okay, well, I'll reach out to Sam. I was like, hey, Sam, I happen to have chat.com. I don't know if he was or wasn't in the running or trying to acquire it, but I have it. I'm not looking to make a profit on it. If you want it, you will obviously do something much better and bigger with it. I don't want to be in the compete-with-Sam game, effectively, is what I said.

And so they did want it, and yeah, we struck a deal. Looks like it's been a very good deal, if the valuations are, you know, to be believed. Yeah. Who knows? Who knows? It's one of those weird things. By the way, the agent.ai domain evaluator said that latent.space is worth between five and 15K.

Okay. So does that feel right? Well, it's missing something. This is V1 of it. This one does not incorporate the transactional data. I have not published that one yet, because it's also operationally very intensive, that other one. We actually had it donated by a listener.

Okay. So I don't know what the real cost is, but it's missing that it's linked to an influencer. By the way, I also own crew.ai, which I've offered to them. I'm an investor. You did? Yes. I bought that. And I've told them, whenever you're ready, you let me know, and I'll sell it to you at cost.

Yeah. I mean, that is some value add. Since you buy a lot of domains, what are your favorite domain buying tips apart from have a really good domain broker, which I assume you have? No, I actually don’t. You don’t? I do my own deals.

Oh, nice. I have a very cards-face-up approach to life. So there's, you know, some people who would tell you, oh, well, if they know that you're behind the transaction, the price is going to go up. Sure. But it's still willing seller, willing buyer. It doesn't mean I'm necessarily going to have to pay that price.

So it’s like, okay, but the upside to it, because I always, you know, reach out as myself when I’m, when there’s a domain out there. And they can look you up. They can look me up. But then I also come off as like legit, like okay, well, there’s very few people who are not going to return my email when I say I’m interested in a domain that they may have for sale or had not considered selling. But, you know, would you consider selling? So, yeah.

And so I own some of my favorites. I still own prompt.com, by the way; that could be a big one. And I owned playground.com, and this is one I don't regret; it went into a good deal. The original idea behind playground.com was that, at the time, OpenAI had their playground where you can play around with the models and things like that, right?

It’s like, okay, well, there should be a platform-neutral thing. There should be a playground across all the LLMs. Then you can, and there are obviously products and startups that do that now. And so that was my original thing. It’s like, oh, there should be playground.com and you can go test out all the models and play around with them just like you can with, with OpenAI’s GPT stuff.

And then Suhail was out there with Playground, the company. And I think he reached out, he might have reached out to me over Twitter or something like that. So we knew of each other. I've still never met him. And he asked me whether I would consider selling, and that was a tough one, because I'm like, I actually have the business idea already in my head, and I think it's a great domain name.

And it's a really simple English word that has relevance in a whole new context now. But once again, I took equity. So I look on the bright side: domains get me into deals that I would likely never have been able to get into otherwise.

So, yeah. We should securitize your GoDaddy account and just make it a fund. It’s basically a fund. Yeah. By the way, so back to the kind of three things or whatever, Simon invested, I don’t know if it’s public yet, but in a company that’s going to treat domains as a fractionalizable tradable asset, because that’s the kind of the original NFT in a way, right?

It’s like, okay, well, if you can, and then if you can make both fractionalizing, but also just to transfer, like right now, it’s so painful when you buy a domain, you go through an escrow service and there’s just all this. It’s like, I just want, like instantaneous, like charge me in Bitcoin or credit card, whatever it is. And then I should show up and I should be able to reroute the DNS.

Like that should be minutes, not weeks or days. Anyway, so. Yeah. That’s what ENS on Ethereum is basically the same. But it needs to be that, but for normies. Yeah, exactly. They should bring it.

Yeah. ICANN and all of that is its own thing. I have a question, since you keep bringing up your Sam Altman rule. One of my favorite, favorite, favorite My First Million episodes of all time was actually one without you there, but talking about you.

Okay. Because Sean was describing you as a fierce nerd, which I'm sure you've heard. I think Sam also is a fierce nerd. And he is. I was listening to this Jessica Livingston podcast where she had him on and described him as a formidable person. I think you're also very formidable, and I just wonder what makes you formidable.

What makes you a fierce nerd? What keeps you this driven? Yeah. Sam’s fiercer and nerdier just for the record. But I think part of it is just like the strength of my conviction, I guess. Like I’m willing to like work harder and grind it out more than people that are smarter than me.

And I'm only slightly stupider than the people that are willing to work harder than me. Right. I'm just the right mix of the kind of grind-it-out, work at it, stick to it for extended periods of time. If I think I'm right, I will latch on and not let go until I can prove to myself that I'm not.

So even like the natural language thing, it’s like, you know, it took 20 years, but eventually I got to a point where the world caught up and it became possible. But yeah, I think, and part of it is, I think this is partly, I think what makes me like, I’m a nice guy and sometimes they’re the most dangerous kind, right?

It's like, okay, well, I don't make enemies or whatever. But my advice would be, and this is my take on competition: I don't think of it as war. I think of them as opponents, right? And I'm not worried; it's a game, right? And you can use whatever analogy. I happen to play a fair amount of chess. I'm a student of the game.

That's partly, I think, what makes me effective. I'm solving for the long term, so I'm kind of hard to deter. So for those of you out there looking to compete with HubSpot: know I'm going to be here for another 18 years. But not that you shouldn't do it. It's a big market. I'm not trying to sway anyone, but yeah.

I think something I've struggled with is this conviction. You said you pursue things with conviction, but you start out not knowing anything. Yeah. So how do you develop conviction? You find it along the way, or you stumble along the way, and then you lose conviction and then you stop working on it.

You know, like how do you keep going? The way I’ve sort of approached it is that, so I don’t generally tend to have conviction around a solution or a product. I have conviction around a problem that says, this is an actual real problem that needs to be solved. And I may have an idea for how to be solved, you know, right now.

And that I may get dissuaded. It’s like, ah, I’m not smart enough. Technology is not good enough, whatever the constraints are, but it’s the problem I have conviction around. It’s like, oh, that problem still hasn’t gone away. So like, I sort of filed away in the back of my brain and I’ll revisit.

It’s like, okay, well, you know, the kind of board changes and it changes really fast now with AI, like things that weren’t possible before are now possible. So you kind of go back to your roster of things that you believe or believed and say, maybe now, now is the time. Maybe then it wasn’t the time.

But I’m a big believer in kind of attaching yourself passionately with conviction to problems that matter. And there are some that are just too highfalutin for me that I’m not going to ever be able to kind of take on. I have the humility to recognize that.

Yeah. I feel like I need an updated founder’s version of a serenity prayer. Like give me the confidence to like do what I think I’m capable of, but like not to overestimate myself, you know?

Yeah. You know, anyway, when you say board changes, how do you keep up on AI? A lot of YouTube, as it turns out. Really? Yeah, a lot.

Okay. Fireship? I don’t know what Fireship is. It’s a current meme right now. Whenever OpenAI drops something, you know, they love this like live streams of stuff from the OpenAI channel. The top comment is always, I will wait for the Fireship video.

Okay. Because Fireship just summarizes their thing in five minutes. No, so my kind of MO, so I, by the way, I keep very weird hours. So my average go-to bedtime is roughly 2am. Oh boy. But I do get average seven, seven and a half hours in. That’s great.

I don’t use alarm clocks because I don’t have meetings in the morning at all, or try not to at least. So my late-night thing is, is I’ll watch probably like a couple of hours of YouTube videos, often in the background while I’m coding. That’s how you’ve seen our talks. I have.

Yeah. Yeah. I've seen them. Yeah. Okay. And so there's so much good material out there. And the thing I love about YouTube... by the way, in terms of use cases and agents that should exist but don't yet, I would love to, and the technology exists now to build this, be able to take a YouTube video of a talk, let's say one on Latent Space.

Oh, not Latent Space, but the AI Engineer event, and say: just pull the slides out for me. Because I want to put them into a deck for reuse, or some form of distillation or translation into a different format. Pull the slides out of a video.

So I think that's interesting. I have. Yeah. So, by the way, on the agent.ai thing, one of the commonly used action primitives that we have is the ability to get a transcript from a video.

And that seems like such a trivial thing, but if you don't know how to do it programmatically, if you're just a normie, it's like, okay, well, I know the transcript is there and I can copy and paste it. But how do I actually get to it programmatically? And then, beyond getting the transcript, being able to encode it so that I can actually give you timestamps.
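Once you have the transcript as timestamped segments, which is roughly the shape caption exports and transcript APIs return, the clip use case described next is a small amount of code. A sketch with hypothetical helper names:

```python
def find_clip_start(segments, phrase):
    """Return the start timestamp (seconds) of the first transcript
    segment containing the phrase, or None if it never appears."""
    phrase = phrase.lower()
    for seg in segments:
        if phrase in seg["text"].lower():
            return seg["start"]
    return None

def to_hms(seconds):
    """Format a second offset as H:MM:SS for a clip marker."""
    s = int(seconds)
    return f"{s // 3600}:{s % 3600 // 60:02d}:{s % 60:02d}"

# Segments shaped like a typical caption export: text plus start offset.
segments = [
    {"text": "Welcome back to the show", "start": 0.0},
    {"text": "Let's talk about agent memory", "start": 754.2},
    {"text": "Thanks for listening", "start": 3601.0},
]
start = find_clip_start(segments, "agent memory")
print(to_hms(start))  # 0:12:34
```

With the start offsets in hand, stitching the "aggregate video clip" is a matter of handing those timestamp ranges to a cutting tool; the hard part for a normie was only ever getting the timestamped transcript in the first place.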

So if you have a use case that says, oh, I want to know exactly when something was said, because I want to create an aggregate video clip. This was the actual original agent that I built for my wife. She wanted to pull multiple clips together, without using video editing software, into one aggregate thing for the nonprofit side, to send to a friend.

Anyway, there are video understanding models that have come out from Meta, but the easiest one by far is going to be Gemini. They just launched YouTube support. Yep. So they’re doing good work over there.

By the way, in terms of like the coolest thing AI-wise recently, I’ll say last week to 10 days has been the new image model, Gemini flash experimental, whatever they call it, because it lets you effectively do editing. And just, and so my son is doing an eighth-grade research project on AI image generation, right?

So he's gone deep on stable diffusion and the algorithms and things like that. I don't know much about it, but one thing I do know: I know enough about stable diffusion to know why editing is near impossible, why you can't recreate an image, because you can't go back that way.

It's going to be a different thing, because you're sort of spinning the roulette wheel another time the next time you try a similar prompt. So the fact that they were able to pull it off... It's still V1, because one of my test cases was: take the HubSpot logo and replace the O, which is this kind of sprocket, with a donut. And it will do it, but it won't size it to the degree that it will actually fit into the actual logo.

It’s like, okay. But yeah, but that’s where it’s headed. Do you know the backstory behind that one?

No. So Mustafa, who was part of Meta. They had image generation in Llama 3. Okay. Lawyers didn't approve it. Mustafa quit Meta, joined Gemini, and did it and shipped it.

And it is rumored, and that’s all I can say, is that they got rid of diffusion. They did autoregressive image generation. And I think it’s been interesting, these two worlds colliding because diffusion was really about the images and autoregressive was really about languages, and people were kind of seeing like, how are they going to merge?

And on the Midjourney side, David Holz was very much betting on text diffusion being their path forward. But it seems like the autoregressive paradigm has won. Like, next-token is... So Suhail and Playground are doing exceptional work in that kind of domain of...

I don’t know if it’s autoregressive, but around kind of image editing and not just the kind of text to image and actually building a UI for like a Photoshop kind of thing for actual generation of images versus just doing text to… It’s fascinating.

I thought diffusion was kind of dead. Like there wasn’t that much, it was just like bigger models, you know, higher detail. And now autoregressive come along and now like the whole field is open.

Yeah. And I think like if there was any real threat to like Photoshop or Canva, it’s this thing.

Just to wrap up the conversation, you have a great post called Sorry Must Pass, which if I did the math right, you first wrote in 2007, the first version. Yeah, that sounds about right.

And then you re-updated it post-COVID. You mentioned you made a lot of changes to your schedule and your life based on the pandemic. How do you make decisions today? You know, has anything changed since you, because you updated this in 2022?

Yep. And I think now we’re kind of like, you know, five years removed from COVID and all of that. I’m curious if you made any changes.

Yeah. So that post, Sorry Must Pass: the issue that happened is my schedule, and life, just got overwhelming, right? Just too many dots and connections. And I love interacting with new people online. I love ideas. I love startups. There's a lot.

But as it turns out, every time you say yes to anything, you are by definition saying no to something else. You know, despite my best attempts to change the laws of the universe, I have not been able to do that. So that post was a reaction to that because what would happen for me would be when I did say no, I would feel this guilt because it’s like, okay, well, whatever happened to me, it’s like, oh, can you spend 15 minutes and just review this startup idea or whatever?

It’s like, and sometimes it would like be someone that was second degree removed, like intro through a friend or something like that. And I felt, you know, real guilt. And so this was a very kind of honest, vulnerable, here’s what’s going on in my life. So this is not a judgment on you at all, whatever your project or whatever your thing you’re working on, but I have sort of come to this realization that I just can’t do it.

So I’m sorry, but I, so my default thing right now, and lots of people will disagree with this kind of default position is that I have to pass because unless, and Derek Sivers has said this really well, it’s like either a hell yes or it’s a no, right?

So, and I’m going to, there’s going to be a limited number of the, the hell yeses that I’m going to be able to kind of inject into my life. So yeah, that, and that’s of all the blog posts I’ve ever written, that has been the most useful for me.

And I still send it out personally, right? I don't automate my email responses at all yet. I don't do automated social media posts. But that one's been very useful. So I encourage everyone, wherever your line happens to be. Lots of people have this guilt issue, and guilt is one of the most unproductive emotions in human psychology; no good comes from it.

Not really. And unless you’re like a sociopath or something like that, maybe you need, anyway, you don’t need more guilt. I would also say, so I would just encourage people to blog more because a lot of times people want like to pick your brain and then they ask you the same five questions that everyone else has asked.

So if you've blogged it, then you can just point them there. So one of the things I'm working on, and there are startups working on this as well, but I started before then, is a dharmesh.ai, right? Something that just captures my knowledge. And it's interesting.

So that's one of the agents on agent.ai, on the underlying platform. Oh, there's a dharmesh.ai? It's out there. It's dharmesh.ai. Yeah. Nice. And it's pure text, no video, no audio right now.

But the thing I've found useful is the question of how I give it knowledge. So I have a kind of private email address, because a lot of the interactions I have, if I do answer questions... Because the other thing, by the way, is I don't do any phone calls at all, even no Zooms at all.

I mean, I’ll get on zooms with teams, but no one-on-one meetings, no one-on-one just doesn’t scale. So I’ve moved as much as possible to an async world. It’s like, I will, as long as I can control the schedule, like I will take 20 minutes and write a thoughtful response, but I reserve the right anonymously with no attribution to kind of share that either with my model or with the world, you know, through a blog post or something.

But it's been useful, because now that I have that email backlog, I can go back and say, okay, I'm going to try to answer this question, and go through the vector store. And it's shockingly good. And I'm still irritated that email doesn't do that out of the box. It's like, they're Google. I think it's got to be coming now.
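The "go through the vector store" step is basic RAG retrieval: embed the question, rank stored emails by similarity, hand the top hits to the model as context. A toy sketch that swaps a real embedding model for bag-of-words cosine similarity, just to show the shape:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real setup would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, emails, k=2):
    """Rank stored emails by similarity to the question; return the top k."""
    q = embed(question)
    ranked = sorted(emails, key=lambda e: cosine(q, embed(e)), reverse=True)
    return ranked[:k]

emails = [
    "the board meeting moved to thursday",
    "your flight to boston is confirmed",
    "notes on agent memory and shared state",
]
print(retrieve("when is the board meeting", emails, k=1))
# ['the board meeting moved to thursday']
```

The retrieved emails then get stuffed into the prompt so the model answers from the backlog rather than from memory, which is the "shockingly good" behavior described above.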

I think they’re finally, the giant has been woken up. I think they’re kind of, clock speed has gotten faster now. You know, it’s one of the biggest giants in the world ever.

Yeah. So, yeah. When I first told Alessio, you know, you were one of our dream guests, I never actually expected to book you, because of Sorry Must Pass. So we were just like, ah, let's send an email; he'll say no, and we'll move on with our day.

So I just have to say, yeah, we're very honored. So I'm thrilled to be here. Huge fan, first-time guest. Yeah. Thank you for all that you do for the community. I speak for a lot of them. You guys taught me a lot of what I think I know.

Yeah. You should. Yeah. I mean, I am explicitly inspired by HubSpot. Oh, thank you. Inbound marketing. I think it’s a stroke of genius and like the AI engineering is explicitly modeled after that. So like you created your own industry, you know, subsection of an industry that became a huge thing because you got the trend right.

Yep. And that's what AI engineering is supposed to be, if we get it right. Yeah. How do we screw this up? How do we screw AI engineering up?

Oh. You know. Yeah. The common failure modes, right, is so the original thing that makes inbound marketing work, the kind of kernel of the idea was to kind of, to solve for the customer, solve for the audience, solve for the other side.

Because the thing that was broken about marketing was marketing was a very self-centered, I have this budget, I’m going to blast you and interrupt your life and interrupt your day because I want you to buy this thing from me, right? And inbound marketing was the exact opposite. It’s like, use whatever limited budget you have and put something useful in the world that your target customer, whoever it happens to be, will find valuable.

Anyway, so the common failure mode is that you lose that. I don't think you will, but it's very, very common, right? It's like, ah, now I'm just going to turn the crank and squeeze just a little bit more. But the real reason I think folks like me appreciate that community so much is that you have that genuine desire to add value.

And there's nothing wrong with making money; none of that is bad. But at the core of it, it's: we want to lift the overall level of awareness for this group of people, and create value and create goodness in the world. I think if you hold onto that, over the fullness of time the market becomes more efficient and rewards that generosity.

That’s my kind of fundamental life belief. So I think you guys are doing well. Thank you for your help and support.

Yeah. My pleasure.

Yeah. And just to wrap, in very Dharmesh fashion, you have a URL for the Sorry Must Pass blog post, which is sorrymustpass.org. So yeah, I thought that was a good nugget.

Yeah. Thanks so much for coming on.

Oh, thanks. Thanks for having me.