Episode 17

CRMs Don't Have to Suck: Rebuilding Business Software with AI and Ruby

with Thomas Witt

About Thomas Witt

Thomas Witt is the founder of Vendis.ai, an AI-native CRM, and the operator of Expedite Ventures, a pre-seed angel collective of CTOs. He is a builder, investor, and Ruby and AI enthusiast.

About This Episode

Many "AI startups" today are little more than thin wrappers around large language model APIs. But what happens when those APIs improve and the platforms absorb those features?

In this episode of The Ruby AI Podcast, Valentino Stoll and Joe talk with builder and investor Thomas Witt, founder of Vendis.ai and operator of the pre-seed firm Expedite Ventures. Thomas shares why he believes the next generation of durable companies must deliver real value deep in the product stack rather than bolting chat onto existing software.

The conversation explores why traditional CRMs are widely disliked and how an AI-native CRM might look completely different. Instead of rigid forms and required fields, Thomas describes a system where conversations themselves become the primary data source. Emails, meetings, and messages are embedded, searched semantically, and transformed into structured knowledge automatically.

They also dive into the architecture required to support this shift. From Ruby on Rails and Hotwire to DynamoDB, vector search, async Ruby, and multi-model LLM workflows, Thomas shares practical lessons from building AI-heavy production systems.

Along the way the discussion touches on agentic coding workflows, LLM-as-a-judge evaluation patterns, telemetry for prompt chains, and why small teams may soon replace the massive engineering orgs we've grown used to.

Full Transcript

Valentino Stoll (00:01.196)
All right. Hello everybody. Welcome back to another episode of the Ruby AI Podcast. I'm one of your hosts today, Valentino Stoll, and I'm joined by Joe.

Joe (00:09.613)
Hi, I'm Joe, the other host. I'm really excited today because I don't have to be the one to say a bunch of provocative or controversial things, because we are joined by Thomas Witt, who, in addition to being a builder and a Ruby and AI enthusiast, says plenty of controversial things for the three of us. So Thomas, welcome to the show. It's great to have you.

Thomas Witt (00:18.847)
Ha!

Thomas Witt (00:36.312)
Thank you very much, really happy to be on.

Joe (00:39.941)
Thomas, I kind of want to dig right in, because you've got a startup that you're going to tell us all about, and you're a businessman and an engineer, which I respect because I like to think of myself as that as well. You said something recently that I thought was interesting, where you talked about your, it's a pre-seed investment firm, right? What's the name of it?

Thomas Witt (01:03.374)
We are Expedite Ventures. It's an angel collective of CTOs.

Joe (01:04.651)
Expedite Ventures. And you said, okay, and you said that, and I want to get this right here, you said that, where is it?

All right, here we go. You said that, stripped of their code, many AI startups are actually naked, implying or outright saying that a lot of startups that say they are AI, I'm using air quotes here, are really just wrappers around a Codex or a Claude Sonnet API. And I'd love it if you could tell us a little more about that.

Thomas Witt (01:47.874)
Yeah, so basically we started Expedite Ventures around 2020, and we've been through many phases. We obviously also had a Web3 phase where we got a lot of Bitcoin and Ethereum pitches. And then, two or three years ago, we started getting a lot of AI pitches, and basically everybody started building AI, obviously, with that watershed moment of ChatGPT being released.

Thomas Witt (02:17.204)
We simply saw a lot of very thin, I mean, it's now become a thing to even call them ChatGPT wrappers, basically. And I think there's nothing wrong with a ChatGPT wrapper, because in theory everybody can build anything, but the devil is obviously in the details. So if you execute really well, which is key to most of this, it really works. But basically people are building features and not products.

That's our observation. You have this, like, whatever, optimization for ads or something. So yeah, most likely Shopify will build that into their stuff, or Meta will build that in, whatever you do. Or we've seen a lot of observability startups for "how well am I doing on ChatGPT." Well, I mean, there are players like Semrush and whatever, so for them it's just another feature to build in. So we think—

And now they're talking about the SaaS apocalypse, basically. And we think you really have to deliver a lot of value, and really get into the value chain at a very core point in a company, to create a product which is really lasting and will not just be replaced by ChatGPT.

Joe (03:15.522)
Hmph.

Joe (03:30.211)
Yeah, and I mean, I think you're spot on. I wrote about this recently, you know, in relation to the tech sell-off that you just hinted at, with the "all SaaS products are doomed" idea, right? You know, with the kind of three states where it's like, okay, if you're a company that does one thing, well, it's over, right? You may as well pack up your bags, because AI could do that one thing. If you're a platform, maybe, and you're sold into the enterprise, maybe you can hang on for a couple of years, because enterprises are slow to rip things out.

Valentino Stoll (03:33.004)
Okay.

Thomas Witt (03:40.109)
Yes.

Joe (03:59.619)
Right. Or maybe you're doing something really well with AI, in which case you can just hang on for a little bit, as long as OpenAI and Claude allow you to exist, which I think is a really ridiculous framing. And, you know, look, markets overreact. That's what happens. People overreact to things. But what I'm really curious about, and I know we want to get into this on this episode, is here you are, building a platform. I mean, anybody who looks at a CRM, typically they think platform,

right? You think Salesforce, you think HubSpot, you think these systems that inevitably branch out into different places in your marketing and your sales processes. So here you are building this. You have a stated goal, I would assume, to not be just a feature and to not be a thin wrapper around an existing AI API. So how do you do it?

Thomas Witt (04:53.166)
Well, I think first of all, it's important to understand the state of the whole CRM industry we're in. I mean, basically every company needs a CRM, at least if you sell something, which most companies do. But the problem is they all suck. So nobody actually wants to use them. Show me the one person who says, my God, I get to log into my CRM in the morning and it's a delightful joy. No, it's not. It's terrible. You get, like—

Joe (05:20.088)
Yeah, no it's not.

Thomas Witt (05:20.93)
bombarded with form fields and have to fill out stuff. Basically, I tried to set up one of those when we started Expedite Ventures, just to manage our deal flow, and I said, that's it, I'll just use Excel or a Google Doc. That's better. So everybody's frustrated by CRMs. That's generally a good place to start a company, because usually you're not the only one. And I think we are, interestingly, at the tipping point for general AI applications, because everybody is very much—

Valentino Stoll (05:31.931)
Okay.

Joe (05:32.637)
Yeah.

Thomas Witt (05:49.807)
Many people have been focused on B2C apps when it comes to AI, and B2B has largely been untouched. And the existing companies like Salesforce and HubSpot, by the way, they both lost 60, 70% of their value on the stock market in the last year. So if you shorted them, you're a happy person; otherwise you might not be. And obviously they see what's coming, and the only answer they have is to bolt on a chatbot.

Joe (06:07.876)
Mm-hmm.

Thomas Witt (06:19.234)
That's basically, for me, the equivalent of Clippy in Word, if you remember that one. That's what everybody does. And if they are very fancy, like HubSpot, they build an MCP server. But I think that's not the answer. Especially in the CRM case, there are two main aspects. First, I think just putting everything in form fields doesn't cut it. For example, we treat conversations as the basis of our data structure.

Valentino Stoll (06:22.327)
Okay.

Thomas Witt (06:46.538)
As we are having a conversation now, you're having Google Meets, you're exchanging emails, you're exchanging WhatsApps or iMessages or whatever. And that contains a lot of data. You might not even know yet that you might need it later. And fortunately with AI, we are now at a point where, with embeddings and semantic search and vector databases and all that kind of stuff, computers can actually understand the meaning of what you're saying. So you're not—

You don't have to fill out form fields anymore. For example, one thing we built in is that we don't have any required fields in the CRM. We just have fields we don't know about yet. That's basically one philosophy, and it's important because it's a totally different data model. Instead of saying, okay, I want to have a zip code, I want to have a name, and I want to have an opportunity stage in percent, you say: I'll try to see your Google meetings, your Google emails, your Meet transcripts, and say, hey, I see you're in New York next week

Valentino Stoll (07:21.335)
Okay.

Thomas Witt (07:44.023)
and you have two hours; maybe you want to meet Peter, because you did business with him a year ago but not since, and you made a million with him. So maybe you want to have a coffee with him; here's an idea for an email. That demonstrates, I think, that you have to rethink, on the one hand, what kind of data you're building up. That also influences a lot of decisions about what kind of data stores you're using, by the way, also in Rails. And the second thing is, I think we also need to rethink UIs.

Basically, we are building the product in a way that we don't expect people to use our UI. Of course, we have a UI, and we need a UI at this point, but we think maybe in two or three years we will live in ChatGPT or in Messenger or whatever. And I think this whole "you can use a B2B app solely by text, chat, or voice" will be, in my opinion, a very defining pattern for how people use

B2B software and you need to be prepared for that.
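
As a rough illustration of that philosophy (conversations as the primary records, no required fields, semantic lookup), here is a self-contained Ruby sketch. The bag-of-words `embed` method is a toy stand-in for a real embedding model, and all class and field names are invented for illustration, not Vendis internals.

```ruby
# Toy sketch: store conversations as schema-less records and search
# them semantically. A real system would call an embedding model via
# an API; a bag-of-words vector stands in so the example is runnable.
class ConversationStore
  def initialize
    @records = []   # schema-less: "fields we don't know about yet"
    @vocab = {}     # word -> dimension index for the toy embedding
  end

  # No required fields: whatever structured output was extracted is
  # stored as-is alongside the raw text and its embedding vector.
  def add(text, extracted = {})
    @records << { text: text, fields: extracted, vector: embed(text) }
  end

  # Cosine-similarity search over the stored conversation vectors.
  def search(query, top_k: 1)
    qv = embed(query)
    @records
      .map { |r| [cosine(qv, r[:vector]), r] }
      .sort_by { |score, _| -score }
      .first(top_k)
      .map(&:last)
  end

  private

  # Bag-of-words "embedding": word counts per vocabulary dimension.
  def embed(text)
    words = text.downcase.scan(/[a-z]+/)
    words.each { |w| @vocab[w] ||= @vocab.size }
    vec = Array.new(@vocab.size, 0.0)
    words.each { |w| vec[@vocab[w]] += 1.0 }
    vec
  end

  def cosine(a, b)
    n = [a.size, b.size].max
    a += [0.0] * (n - a.size)
    b += [0.0] * (n - b.size)
    dot = a.zip(b).sum { |x, y| x * y }
    mag = Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x })
    mag.zero? ? 0.0 : dot / mag
  end
end

store = ConversationStore.new
store.add("Met Peter in New York, closed a one million dollar deal",
          { person: "Peter", city: "New York" })
store.add("Weekly sync about the Rails upgrade and Hotwire migration")
hit = store.search("who did we do business with in New York?").first
```

A production version would swap `embed` for calls to an embedding API and keep the vectors in a vector-capable store (Thomas mentions OpenSearch), but the shape, schema-less records plus similarity search, is the same.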

Joe (08:49.22)
What do you say to that, Valentino?

Valentino Stoll (08:49.579)
Yeah, I'm wondering, you make a lot of great points. And I wonder.

You know, where do you decide, as you're diving into all this, what the most valuable touch point is, right? Like, you know, a lot of people start to question whether they should be building anything for an AI application, when one of these model companies could just decide they're going to offer it, right? How do you not just decide, I'm going to build for, you know, ChatGPT integration

or something along those lines? Versus, what is the value? How do you weigh that value of product building at this point? Because I feel like a lot of people get in that feature-building mentality, and they're like, well, yeah, I'll just build the features until the model company gives it to me, and then I'll stop building that feature and have it for free and still pay the model company. So where do you see that product value and product building

Joe (09:46.562)
Right.

Valentino Stoll (09:59.347)
like really translate for you.

Thomas Witt (10:02.447)
I think, I mean, first of all, the very old saying is still true that people massively overestimate the short-term impact of technology and massively underestimate the long-term impact. So there will be huge impacts for every one of us in the software industry in the next 10 years. And in the next year, maybe...

Not so much, because specialized applications are really hard to build. I mean, we barely have self-driving cars. So if AI is so great, why doesn't it just build Tesla's self-driving car software overnight? No, it can't. It's hard. It's really hard to do. And obviously, for example, when we started, we talked, which is generally a great recommendation I would give to anybody starting a company: talk to a lot of people without writing a single line of code. We interviewed 300, 400 people.

Joe (10:32.908)
Right.

Thomas Witt (10:51.501)
How do you use your CRM? Do you use a CRM at all? You learn a lot at that point, and half of them don't need a CRM at all. They're just fine with Excel, because they're just very small companies, and maybe they save a lot of money.

Joe (11:04.012)
And also because Excel is actually great software. Like, it's been around 40 years. It's great. Everybody wants to dump on it. It's great software. Yeah. Yeah, I agree.

Thomas Witt (11:07.253)
It's great. Basically, every B2B enterprise software competes with Excel. Your main competitor on the slide should always be Excel. That's it. And from the other half, you learn a lot. And the main question was, hey, why don't you just build a better front-end for HubSpot and Salesforce? That's the short-term reaction. And we could obviously have done that, because, for example, you can talk with our system and all that stuff, and we could have put that, via MCP, into HubSpot or whatever.

But it doesn't cut it, because first of all you're not really building a platform, and you're not owning the data, and therefore you have a very limited understanding of the data. What I just said, that the conversation is the basis for everything for us, simply is not possible with the data model of the legacy CRM vendors. And I think we're going through phases: obviously we have the old ones like Microsoft and HubSpot and Salesforce.

And they're all fine for enterprise. By the way, we are not targeting enterprise. We think there is so much stuff you have to consider there, integrating an ERP system, whatever. I think Salesforce is great at that; they can just have that market. That's fine. But there's a huge market around that. And there were a lot of other competitors who came up, especially around the data augmentation thing, like Attio, Clay, and whatever they're all called.

And I think they solved one thing with agentic stuff: they basically find out more information about the people you're talking to. Mostly that's what these systems are about. But nobody really changed the radical thinking about CRMs. Like, what if you only have a chat prompt to interact with your system? Hey, show me the sales reports for last month. Now divide it by state, now down to the city level, or by sales rep, and now by product.

Joe (12:55.076)
Mm-hmm.

Thomas Witt (12:59.469)
and whatever, and it just gives you the graphics without clicking through an interface. And I think this is where we will maybe end up, because when you look at Claude Cowork, that's what people expect now.

Valentino Stoll (13:13.818)
Yeah, you make a lot of great points there. Especially, you know, I think the question of "do we build a product" has always existed. And I think, you know, 37signals is notorious for

proving everybody wrong, showing that you can just make something well built, add minimal features to it, and it works great for the customers you're trying to serve, right? And you can have a consistent business. Right. Exactly. And so, yeah. So you've decided you're going to build this company, and you reached for Rails. Why? I'm curious.

Joe (13:34.723)
Mm-hmm.

Thomas Witt (13:36.089)
fully.

Thomas Witt (13:39.936)
The question was always why doesn't Google build it, for example.

Joe (13:43.3)
Mm-hmm.

Thomas Witt (13:57.322)
First of all, I really love Rails. I got into Rails in 2007. And, funnily, I heard that long six-hour interview with DHH, and I can relate to a lot of his points, because before Rails, I thought I was a really shitty programmer, because I really hated getting into details about pointers and C and whatever. Mentally, I understood it, but I didn't like doing it, to be honest.

Rails, even though that's a very old phrase, really gave me joy in programming. Everything came together and it worked, if you're doing a web application, obviously. I mean, it'd be different if you're doing self-driving cars. We were based on Objective-C at that point in the company, and we made a big plan to rewrite our whole system, it was a content management and customer experience system. We rewrote it all in Rails, and it was very successful.

We were really happy with it and built software which was used by hundreds of millions of users back in the day. Then AWS came along, and there I learned a lot. For example, I was always a fan of non-relational databases for many applications, because many applications are not actually relational. Even a CRM is not relational. You have very simple relations in a CRM: you have people who work at companies,

and opportunities which might belong to companies and might belong to people, but that's it. And especially in the AI phase, where you basically get back structured output, which is literally JSON, it's really interesting to put that in the database and then also re-index it, for example, in OpenSearch or Elasticsearch at the same time. And then...

basically just understanding it and slicing and dicing the data, which is really, really hard when you have a relational system, which is not really built for that kind of semi-structured data, I think. But back to your point: I love Ruby, I love Rails, I built a successful company with it and sold it, and there wasn't even a question. And interestingly, every time I look, I don't want to start a rant here, but every time I look into TypeScript and React and—

Thomas Witt (16:20.432)
I always think, how do people get away with it? Sometimes you're building with all these packages which you have to install, and then it doesn't work, and Ruby just works. It is a very beautiful language and a beautiful ecosystem, with very friendly people. I've just seen it. I was at Rails World in Amsterdam, and nothing has changed over the last 15 years.

Joe (16:42.22)
You are ranting, which is good. We are pro rant on the Ruby AI podcast, so thank you for that.

Thomas Witt (16:46.16)
Good. No, I don't understand it. When we started, we basically said we don't do any JS. We couldn't entirely hold to that. I was really saying, no, we don't want Yarn, we don't want npm or whatever in our system. We do have to now for certain stuff, like Tailwind, but we still have the philosophy of

Thomas Witt (17:09.276)
never using JavaScript unless it's absolutely necessary, with Stimulus and whatever. And I think it's tremendous what that whole Stimulus and Hotwire ecosystem has produced. It is so amazing how you can work with that stuff. It's just great. It's awesome.

Joe (17:27.148)
I'm curious, I would like to drill in on sort of the architecture. You've spoken about DynamoDB, so you seem to be a proponent of NoSQL, a proponent of non-relational databases. And you also wrote a little bit about the open source library you just shipped, which is the AWS SDK

Valentino Stoll (17:52.554)
Okay.

Joe (17:54.045)
HTTPS async, or HTTP async, sorry. And I'm curious to know: what problem were you seeing in production that made this necessary?

Thomas Witt (18:05.008)
What's really interesting, and what I like about DynamoDB, apart from it being a non-relational database suitable for the kind of data we have, is that you don't have to deal with managing the database. That's a point people, in my experience running at least a scaling B2B app, constantly underestimate. I've heard, "but it all works with Postgres." Yeah, it does, obviously, but—

you really need to have somebody dancing around it, and it works until it doesn't. Then you forgot an index here, or it doesn't scale there, and then you need to think about a lot of things. And the beauty of DynamoDB is that if you design it right, you can throw literally petabytes of data at it and it will scale, if you know exactly what you're doing. That's what I like about it. What I had to look into is, when you—

We first used the ruby-openai gem and then moved to RubyLLM, because it was the hot new kid on the block, and it still is. And one of the things it proposed was the use of Async. That's really interesting, and it's something I actually hadn't touched much. I knew it existed, but it became very clear that you have to deal with async when you're building a modern Ruby application with AI, because it's not CPU-intensive.

Valentino Stoll (19:04.969)
Okay.

Thomas Witt (19:27.734)
You're basically waiting for HTTP calls to LLMs all the time. And that's when I had to relearn a lot of the stuff I'd learned in Ruby. So I learned that using threads is actually bad, or that just using the simple ||= operator is obviously very, very bad. That was already bad with threads, but now I have to use fiber storage, or—

Valentino Stoll (19:42.013)
Okay.

Joe (19:42.862)
Hmm.

Joe (19:46.179)
Mm-hmm.

Thomas Witt (19:55.737)
I have to use concurrent maps. Or: sleep doesn't work because it blocks the reactor, so I have to use the async task's sleep instead. So I felt I had to relearn a lot of idioms which had been totally natural for me for the last 10 years. And I must say RubyLLM brought that to me, and we are running, for example, Falcon in production as a web server, which is the web server of the async ecosystem.

Valentino Stoll (20:02.242)
Okay.

Thomas Witt (20:22.833)
And everybody says, yeah, it's so easy, and apparently Shopify uses this, but when you really use it in production, there are a lot of very undocumented features, to put it mildly, or bugs. And I totally admire the work that ioquatix guy has done, but it's far from "I just install it and then it works." I recently updated from like 0.28 to 0.29 and it totally crashed because of a behavior change. So anyway.

Valentino Stoll (20:49.169)
So Okay.

Thomas Witt (20:52.697)
And to be honest, AWS support for Ruby is okay-ish, but not great. I don't think they have a very big team; they're very responsive and they're very nice, but I don't think they put a ton of resources into it, which is a shame. So shout out to AWS: put more resources into Ruby and not quite so much into TypeScript, Go, and Python, I would say. And the main problem is that DynamoDB is also basically a service which you call over HTTP,

basically the same thing. And I saw there's a lot of stuff that blocks each other or simply does not wait correctly. And it's really, really hard to debug, because it's a library you don't own. So I basically tried to make a patch to support the more modern libraries, and there's this async-http library, which works well, but it's really hard. So I think many Ruby developers will have to relearn a lot of things when it comes to async. That's my observation. I had to.
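
The relearning Thomas describes is easier to see with a toy model. The sketch below uses only stdlib Fibers as a stand-in for the async gem's scheduler, to show why a body that yields cooperatively keeps tasks interleaved while a blocking body (the effect of a plain `Kernel#sleep` in a reactor) starves everything else. None of this is Vendis code.

```ruby
# Toy round-robin "reactor" built on plain stdlib Fibers, showing why
# a blocking call stalls every task while yielding back to the
# scheduler keeps tasks interleaved. The real async gem uses a proper
# event loop driven by IO readiness; this is only a teaching sketch.
interleaved = []

make_task = lambda do |name|
  Fiber.new do
    3.times do |i|
      interleaved << "#{name}#{i}"
      Fiber.yield # cooperative: hand control back to the "reactor"
    end
  end
end

tasks = [make_task.call("a"), make_task.call("b")]

# Round-robin scheduler: resume each still-alive fiber in turn.
until tasks.empty?
  tasks = tasks.select { |t| t.alive? && (t.resume; t.alive?) }
end

# Same two tasks, but the bodies never yield: the first runs to
# completion before the second even starts, which is exactly what a
# blocking Kernel#sleep (instead of the task's sleep) does to a reactor.
blocked = []
hogs = [
  Fiber.new { 3.times { |i| blocked << "a#{i}" } },
  Fiber.new { 3.times { |i| blocked << "b#{i}" } }
]
hogs.each { |f| f.resume while f.alive? }
```

In a real Async/Falcon app the scheduler is driven by IO readiness rather than explicit `Fiber.yield`, but the failure mode is the same: anything that blocks without yielding stalls every other task on the reactor.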

Valentino Stoll (21:51.593)
Yeah, I feel like it's not just async. I feel like we're all relearning how to build applications, because we are leaning more and more on these kinds of LLM, you know, high-IO, IO-bound workflows.

Thomas Witt (22:00.646)
Yes,

Valentino Stoll (22:10.044)
You're right. I feel like some web servers, we won't name names, but some web servers just aren't built for heavy IO. And that's acceptable; that was where things were before. We didn't have a lot of IO-bound tasks other than your database, and if you can couple that to your user,

then it can work well for very specific, you know, for very specific servers. And we're kind of shifting to this new way of serving customers where the user isn't bound to that request or the data, and many things are involved in accessing the data at the same time, in a more async fashion. And so we're kind of, yeah, doing the dance of

Joe (22:54.436)
Mm-hmm.

Valentino Stoll (23:04.84)
relearning what the best conventions and configurations are. And that's going to keep changing, right? Maybe everybody's workflow isn't making the most use of these LLM calls in the same ways, and it might not make sense to just use Falcon, right? Or to use Async for everything. So it is going to be a long process of finding the best use cases for how much you're leaning into what. And so I guess my question for you is: at what

stage did you decide? How are you mapping this out as you're building the thing? Did you decide going into it, knowing that you were going to be so heavy on LLM use and chaining, that you had to make use of Async? Or did you find that out after the fact? Where in your building process did these kinds of shifts happen?

Thomas Witt (24:00.134)
We had to, basically, because before we were using the ruby-openai gem, which used threads, and it somehow worked-ish, but it wasn't very natural. But the thing is, for example, what we are doing is often not just firing one prompt, but firing a magnitude of prompts when we get user input. For example, when you're talking to the system, we obviously have to first interpret what you're actually saying. So we get basically a stream of data back.

At the same time, we send that stream of data back out to find out what the entities in that stream are. Joe is a person, Valentino is a person, Ruby is something else, or whatever. So that gets sent back and forth. And when you're done talking, we try to immediately show an executive summary: what we understood is you met with ABC, you talked about blah, blah, blah.

That means another back and forth with a different model, because it needs to be a faster model, one that's better at summarizing, and so on. Then we have another model which tries to get tasks out of it and understand: okay, I met with him, what are the next steps? We want to present suggestions for what to do. And that's when it became really clear that even one conversation with the system fires a lot of different—

Yeah, different prompts, different interactions with different model providers. It's not even just OpenAI; it's also Gemini and others, and there's text-to-speech and so on. So that's when we found out we needed to look into that async stuff anyway, because if you take that from a multi-tenant perspective, with multiple people doing that at the same time, you can't make them wait that long.
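
The fan-out Thomas describes, one user input triggering entity extraction, a summary, and task suggestions against different models at once, has roughly this shape. The model names and the `fake_llm` stub are invented, and plain threads stand in for async tasks so the sketch runs with the stdlib alone.

```ruby
# Sketch of the fan-out pattern: one piece of user input triggers
# several LLM prompts concurrently, possibly against different models.
# fake_llm stands in for real API calls (IO-bound, hence worth running
# concurrently); threads stand in for the async gem's tasks.
fake_llm = lambda do |model, prompt|
  sleep 0.05 # simulate the network latency of an LLM API call
  "#{model} answered: #{prompt}"
end

def fan_out(input, llm)
  jobs = {
    entities: ["fast-model",    "Extract entities from: #{input}"],
    summary:  ["summary-model", "Summarize: #{input}"],
    tasks:    ["smart-model",   "Suggest next steps for: #{input}"]
  }
  # Fire all prompts at once; total wall time is roughly one call's
  # latency rather than the sum of all three.
  threads = jobs.transform_values do |(model, prompt)|
    Thread.new { llm.call(model, prompt) }
  end
  threads.transform_values(&:value)
end

results = fan_out("Met with ABC about the Q3 renewal", fake_llm)
```

The point is the shape, not the concurrency primitive: in an Async/Falcon app each `Thread.new` would be an async task, but either way the fan-out returns all three results together once the slowest call finishes.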

Valentino Stoll (25:38.569)
Yeah, that makes a lot of sense. It makes me think, too, you know, Ruby kind of came from this, like, Lisp, Smalltalk phase of language building, very

trying to be object oriented as a base. And, you know, do you see your programming of Ruby leaning more into that style? Has your use of Ruby changed based on how these things are evolving, and in what ways? Do you find yourself still making service classes? Are those tricks still in play, or do you notice new kinds of patterns evolving?

Thomas Witt (26:21.319)
yeah.

Joe (26:28.132)
Hmm.

Thomas Witt (26:34.798)
No, I mean, what I definitely see is there's a lot less stuff going on in models and a lot more going on in services and concerns for those services. That is definitely a total difference from what we did before. For example, when we composed the prompts, I was very Ruby-driven at the beginning. So I basically created a class, LLMService::UserImport, which inherits from LLMService.

And then I included LLMPromptWithContext, LLMPromptWithActor, LLMPromptWithEntities, with performance indicators and whatever. And it simply wasn't manageable. So the classic inheritance stuff didn't work, but there was stuff which did work. For example, as you said, we built our own prompt components, maybe we'll turn it into a gem and release it publicly, which are basically like ViewComponents. We could—

We could learn a lot from how ViewComponents were built, building from components, because it's the same: they can all inherit from each other, but not all at the same time. They get inputs, they get outputs, they have a part which does calculations, they have a part which just generates markdown. So that's really interesting, and that changed a lot. But this whole service orchestration, and having an overview of which model calls what at a certain point, is

really, really tricky, because it's not just the model. You have agents, tool calls, which are basically classes in themselves, or subclasses, which might or might not differ between services. And you have, what many people don't do at the beginning, a lot of telemetry and tracking. So for example, we have big chains in Langfuse. Shout out to them. Great product. Everybody should use it.

It's not that easy to integrate in Ruby if you want to do it well, but that's not on Langfuse, that's on Ruby. You have traces and spans, and exactly what I described: you have one input, and that triggers a lot of different prompts, and you want to see what happened at what point. Then the next model comes out and you want to see, what would have happened if I ran that with not 4.1 but 5.2?

Thomas Witt (28:53.156)
And this is when it really gets tricky. Our whole LLM code does many things, at least 10. And then comes chain invalidation: you change a system prompt, and you can't continue conversations on the old system prompt. You want to design prompts so they're cacheable by OpenAI, and so on and so on. That is really hard; I don't think I have fully found the answer to it. But our application, Vendis, now looks very different

Joe (28:53.197)
Right.

Thomas Witt (29:23.154)
than a Rails application I might have written five years ago, I would say.
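
Thomas doesn't show the component API, so the following is only a guess at what "prompt components like ViewComponents" could look like: composing small renderable objects instead of one deep LLMService inheritance chain. All names here are invented for illustration.

```ruby
# Illustrative prompt components, loosely modeled on ViewComponent:
# each takes inputs, may compute things, and renders a markdown
# fragment; a prompt is composed from components instead of being
# assembled through a deep inheritance chain.
class PromptComponent
  def call
    raise NotImplementedError
  end
end

class ContextComponent < PromptComponent
  def initialize(user:)
    @user = user
  end

  def call
    "## Context\nYou are assisting #{@user}."
  end
end

class EntitiesComponent < PromptComponent
  def initialize(entities:)
    @entities = entities
  end

  def call
    "## Known entities\n" + @entities.map { |e| "- #{e}" }.join("\n")
  end
end

class TaskComponent < PromptComponent
  def initialize(goal:)
    @goal = goal
  end

  def call
    "## Task\n#{@goal}"
  end
end

# A prompt is just the rendered components joined together.
def compose_prompt(*components)
  components.map(&:call).join("\n\n")
end

prompt = compose_prompt(
  ContextComponent.new(user: "Valentino"),
  EntitiesComponent.new(entities: ["Peter (person)", "ABC Corp (company)"]),
  TaskComponent.new(goal: "Draft a follow-up email about the renewal.")
)
```

The payoff of this shape is the one Thomas mentions: components can be mixed freely per service (context plus entities for one call, context plus task for another) without an unmanageable tree of `LLMPromptWith...` includes.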

Valentino Stoll (29:28.992)
So how do you see that scaling? Do you have to rethink your scaling methodology as well? Is it not just vertical versus horizontal scaling? Do you find yourself maybe considering serverless more, because it aligns better with your objects?

Does any of that kind of thing change your building process, or are the considerations pretty much the same?

Thomas Witt (29:57.093)
No, I don't think scaling is our problem, at least, because of the way we built it. DynamoDB scales like there's no tomorrow. OpenSearch on AWS also scales like there's no tomorrow. It will take a lot of revenue until we hit limits there. Rails also scales. Falcon is really fast with that async stuff once you've got it managed. And we have auto-scaling; our infrastructure is totally automated.

So it scales up and down, and we have different environments and so on. We're also using very classic queuing patterns. We're using SQS on AWS, but it could also be Redis or whatever. So basically, every time we call an LLM, it fires a background job which gets picked up by a worker or a fleet of workers. And then the workers—

might update your Turbo Streams in the front end, which is a really interesting thing, and that pattern works really well. So we try to keep the web front end as lean as possible and do as much as possible in workers. And I think that pattern scales very well. That's nothing new in Ruby; it's very standard stuff, with Sidekiq or whatever, that people have been doing for decades. So there are a lot of

patterns which are very natural to Ruby or the ecosystem, which come very naturally when you're building AI applications. I felt I didn't have to bend over backwards just to get something done. It was always clear in what way you do it. I think it's just, you have to think, how do I do it so I keep it maintainable? That's the main point. And not just for me, but also for agents. You said you talked with a lot of people about how they build it. And obviously we also use a lot of agentic coding.

And that requires very strict discipline about documentation. So we have a huge folder of MD docs where we basically document every feature we're building in detail. Maybe we should have done that 10 years ago in all applications as well. But that keeps it a bit more manageable. Even if I don't remember how something was actually built, I can just ask Claude or Codex, hey, and it gives me an answer in like 30 seconds. So that's good.
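The enqueue-and-pick-up pattern Thomas describes can be sketched in plain, dependency-free Ruby (no Rails, all names hypothetical): the web layer only enqueues, a worker drains the queue and runs the model call. In production this would be SQS or Sidekiq jobs plus a Turbo Stream broadcast per result.

```ruby
# Web layer enqueues; a worker drains the queue and runs the (faked) LLM call.
jobs    = Thread::Queue.new
results = Thread::Queue.new

# Stand-in for a real LLM call.
def fake_llm(prompt)
  "summary of: #{prompt}"
end

worker = Thread.new do
  # pop returns nil once the queue is closed and drained
  while (job = jobs.pop)
    results << fake_llm(job[:prompt])
  end
end

# "Controller" side: fire and return immediately.
jobs << { prompt: "email from ACME Corp" }
jobs << { prompt: "meeting notes, pricing call" }
jobs.close
worker.join

puts results.size # => 2
```

The web request never blocks on the model; it only pushes a job, and whatever updates the UI (a Turbo Stream broadcast in the Rails case) happens from the worker side.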

Joe (32:03.012)
Haha

Valentino Stoll (32:05.603)
You

Joe (32:18.328)
Hey, hey, I'm curious about that last part, because you had talked about

sort of Claude Code rules and how there's a potential there to improve the style of code, right? Which I like as a viewpoint, because there's so much focus on, you know, just churning out code and not even looking at it, let alone kind of observing, you know, style or

Thomas Witt (32:33.587)
Yeah.

Joe (32:48.098)
design as it evolves. Now you just mentioned that you use a lot of these rule files, that MD docs folder. Are there other ways that you enforce style, for example, through repo conventions or through the tests themselves?

Thomas Witt (33:00.977)
Yeah, absolutely. I mean, I was always a pain in the ass when it comes to formatting code. So I hate it when stuff looks different. So the first thing I did was put a CI pipeline in. We're using bin/ci since Rails 8.1. And so we have one linter interface and it calls a lot of different linters. We're using Herb, big shout out to Marco Roth, great library. We used erb_lint before.

Joe (33:07.148)
Yeah, me too.

Thomas Witt (33:29.171)
We're using the Rails formatter, we're using Rufo, we are using RuboCop. So basically every piece of code gets checked three or four times, and it's built into all of our agent rules: you must not deliver any code before the Rake formatter task has actually run. And you can't even check it in, both on GitHub as well as on the deployment, if the Rufo or RuboCop or whatever style rules don't match. I think that's super important. And it might sound a little bit anal, but

Joe (33:53.561)
Hmm.

Thomas Witt (33:58.708)
It's really helping to have a very consistent code base. Also, look at things like Ruby 3.4 with this `it` syntax, for example. I really like it. So I said, I want to have that syntax consistent across the whole code base, and that's a task which can be done really well. I think we put a lot of work into the orchestration between Claude and Codex, because for our use case, and I don't know if it's the same for everybody, Claude is really great at planning.

But Codex is, in our experience, much better at implementing. It follows the rules of the actual implementation much, much better. But obviously the architecture work itself is better in Claude. So what we did is we created a lot of agents and skills to do that. And interestingly, it was a bit of an investment in the future, because you had to nudge Claude to actually use them, and with every release it uses them more. So it's getting better. All these agent teams, they even wrote

Joe (34:31.02)
Interesting.

Thomas Witt (34:53.893)
a shell script which says, okay, you touched a controller, and which reminds it: you have not run that skill. So you need to run that skill to check whether it complies with our rules. And you really have to put in effort. You have to understand how these tools work. If you have a bad CLAUDE.md and AGENTS.md and a bad setup with that, I think you're not getting very far if you have a huge app to maintain.

So that's where I'm putting a lot of effort, and we have a lot of standardized slash commands. So I can say, plan this, and Claude does the plan and then mandatorily calls Codex; you can run Codex as an MCP. I even wrote a small NPM package called MCP agents where you can call Gemini, and if they disagree, they even ask Gemini for a third opinion. And what you get back, when they have all figured it out with each other, is really gold, especially

as Rails is so structured in terms of what goes where. And that really works out extraordinarily well for us.
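For readers who haven't seen the Ruby 3.4 `it` syntax Thomas standardizes on: `it` is an implicit name for a block's single argument (requires Ruby 3.4 or newer).

```ruby
deals = ["acme renewal", "globex intro"]

# Ruby 3.4+: `it` implicitly names the single block parameter,
# replacing { |d| ... } or the numbered { _1 ... } form.
titles = deals.map { it.split.map(&:capitalize).join(" ") }

puts titles.inspect # => ["Acme Renewal", "Globex Intro"]
```

Since `it` and `|d|` styles mix badly in one codebase, picking one and enforcing it via the linter rules above is exactly the kind of task agents follow well.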

Valentino Stoll (35:58.555)
Yeah, what do they call it? The conferring of judges or something like that? Where you get the group to group think. Yeah.

Joe (36:03.492)
Yeah, yeah.

Thomas Witt (36:03.858)
of that. Yes.

Totally. There was a big Shopify talk at the last Rails World about that. I think they took it to a totally different level. We are not that elaborate, but for instance, Langfuse also has features to do that LLM-as-a-judge thing. And I think that gets more and more important. And it's not just about the coding side: when we get back an important output, when it comes to money or whatever, you might want to double-check that with a different model.

Valentino Stoll (36:32.868)
Mmm, yeah. You make a great point.

Joe (36:32.93)
Yeah. I, you know, I do that too, just with, I do that with text. I do that with like strategy and stuff like that. Cause something comes back and it's like, okay, well I, I think this is smart, but it's going to outsmart me because it's, it's a word calculator. It knows how to put these great words in place. So I take it, you know, whatever it might be. And I'm like, all right, well see what a Gemini has to say about this. Let's see what Claude, you know, has to say about this. Let's see what, you know, what open AI has to say about it. Yeah, it really does work.

Thomas Witt (36:58.308)
But I see, I mean, in 90% of the cases Codex finds something where Claude says, great findings by Codex, I haven't thought about that, let me put that in. And that happens often, especially with older stuff.

Joe (37:06.06)
Yeah. It's funny that you get that because a lot of times I get like, that LLM is wrong and here's why. Which I love. you're just throwing shade at Claude or whatever. It's so funny.

Thomas Witt (37:13.288)
Really?

Thomas Witt (37:19.06)
you

Valentino Stoll (37:19.206)
I wonder if there's like a competition MD file that's secretly stored somewhere. They don't do too great at these points, so we're going to save this for later.

Joe (37:24.26)
Yeah, yeah. Exactly, yeah.

Thomas Witt (37:27.028)
and

Thomas Witt (37:32.223)
Maybe, but especially when it comes to, for example, I could have never found all the problems or stuff we did wrong with fibers if I didn't have a rule or an agent for that which checks it. With fibers, at least I'm not smart enough to do that, I have no chance to find all these edge cases where you need to put something in a fiber because it could have a concurrency issue with ABC. No chance. And it really is so much better in terms of code quality.

Joe (37:42.169)
Right.

Joe (37:55.31)
Mm-hmm.

Valentino Stoll (38:00.58)
Yeah, so I'm curious. It seems like you're investing a lot in the coding agent help, which is a common pattern that's evolving as well, right? And so how do you see that? Do you see yourself having to manage it a lot? Or is it like, once you get the good mechanisms in place, it kind of just starts going on autopilot a little more?

Thomas Witt (38:10.931)
Yes.

Valentino Stoll (38:29.594)
How has that workflow been as far as onboarding new members and getting other people to use the tooling? Is it pretty smooth sailing, or do you find yourself circling back, like, until you're just working on all the tooling?

Thomas Witt (38:43.412)
I think it's a bit of 80-20. You always have to force yourself not to over-engineer that stuff, because there's always some new feature which you could try. And if the agent now calls the skill and blah... well, you can overdo it for sure. We did also try stuff like having a Ralph loop, which basically constantly loops over the whole code, gives it to all the agents, and produces a big file. And when it's done, it starts from the beginning.

Valentino Stoll (38:53.766)
You

Joe (39:02.508)
Yeah.

Thomas Witt (39:11.134)
Sometimes it's good, sometimes it's not. So I think you can totally overdo it. I think it's important for the onboarding of new people to have some kind of standards in that. It's really hard. So we always say we plan with our vendis-plan or rails-plan slash command, which calls Codex, because it simply makes the output better. So that's important. And so I think...

For example, we're a really small team. It's my co-founder and I and our first employee. And we found it really hard to find employees. I was also at Rails World looking around, hey, maybe there's some person I can work with. And I found there are either junior people who are totally hyped about AI but really lack a bit of, maybe it's that gut feeling, oh no, that architecture feels wrong, because they don't have the experience.

And so many senior people I talked to said, oh no, I don't know, it's not as great as if I had written the code myself. And so there's a big disconnect. I felt like, I mean, I'm in my late 40s, so I'm sometimes not the most flexible person, but I met a lot of people where I thought, okay, this is like when I met a COBOL programmer, back in the early days when we were talking about Rails.

Joe (40:32.781)
Mm-hmm.

Thomas Witt (40:33.48)
And I think there will be people who either adapt to it, because it's so good and it's going to get better every day. Every second week an innovation comes out that kind of blows my mind. I tried the agent teams feature of Claude Code and it started five tmux windows and did different evaluations and came back and said, this is your problem. I said, okay, wow. Good luck. So I think there has to be a certain recalibration of mindset in the

Joe (40:49.219)
Yeah.

Joe (40:55.0)
Yeah.

Thomas Witt (41:00.916)
programmer community. And I mean, apparently even DHH recalibrates, so that means a lot. I don't know, what's your opinion on that?

Valentino Stoll (41:10.276)
Yeah, I'm torn, because I lean heavy into all of it. But I've also lived long enough to have seen the alternative. And so there are benefits to both sides. Sometimes waiting can be more fruitful in some things. And so I guess I try and find... I'm having a hard time finding that balance myself.

Joe (41:16.153)
Yeah.

Joe (41:35.161)
You know, nobody on this podcast is young or mentally or physically flexible. However, we all lean into this because to me, it makes me feel like when I was a younger, less experienced engineer with a ton more to learn. And now, yeah, I know that there's always a ton to learn and I'm no expert at...

nearly anything, but I have thought highly of myself as a software engineer in the Ruby world. And now I look at it and say, wow, there is so much to learn. I think, Thomas, that your example, of both learning that you had a lot to learn with respect to fibers and asynchronous Ruby, and using AI to some capacity to boost your learning and to another capacity to boost the output,

is a perfect example of what I would shoot for as an IC, as a contributor to a project. There is the threshold where it's like, I don't know as much as I thought I did. And hey, here's a bunch of tools that help me to not only learn it, but also use them to continue to be productive. I really don't see a downside.

Thomas Witt (43:00.179)
Yeah, yeah, I agree. I agree. That was so smart.

Joe (43:01.102)
Show's over. That's it. Go generate some code.

Valentino Stoll (43:02.805)
Mic drop. So let's dig into the... because you did mention adopting some new technology like Falcon, right? And it wasn't exactly straightforward, even based on your past experience, right? It still isn't. I'm curious first, what are those

Joe (43:14.083)
Mm-hmm.

Thomas Witt (43:14.548)
Yeah.

Thomas Witt (43:19.145)
It still isn't, yeah.

Valentino Stoll (43:27.489)
areas that are pain points working with it, like figuring out and maybe getting used to the new style of working with a web server. Like, what aspects of these facets of your traditional Rails development over the past, I imagine, decade or more, probably more... What facets are the

Thomas Witt (43:47.775)
Yeah, longer,

Valentino Stoll (43:55.936)
bottlenecks for you? What are the challenges? Like, where could somebody maybe more junior be more beneficial, because they don't have that backlog of experience maybe causing some friction?

Thomas Witt (44:09.607)
Yeah. So, I mean, obviously we talked about fibers and async, and that's clearly the biggest thing. And not just from a technology perspective, when do you write which code, but what actually happens. For example, there are a lot of places in the code where the same data gets written to the same objects. For example, the executive summary might come in first, or the extracted entities might come in first.

But you all want to have it in the same object. Stuff like: if you were in a relational database, well, transaction, but even a transaction might be blocked. So what do you do if the data has been updated? Are we talking about the same data or not? For example, we are also never deleting data. We are versioning data and we can come back. We have another LLM which basically tells in natural language what the difference between those versions has been, what actually changed. So that kind of stuff.
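The never-delete, versioned-writes idea can be sketched as an append-only record in plain Ruby (all names hypothetical): each agent appends a new version instead of mutating in place, so late or concurrent writers never clobber each other and the history stays diffable.

```ruby
# Append-only versioning: "current" is just the newest version; nothing
# is ever updated in place, so concurrent writers only ever append.
class VersionedRecord
  attr_reader :versions

  def initialize
    @versions = []
  end

  def write(data, source:)
    @versions << { data: data, source: source }
  end

  def current
    @versions.last[:data]
  end

  def sources
    @versions.map { |v| v[:source] }
  end
end

record = VersionedRecord.new
record.write({ summary: "intro call" }, source: "summary_agent")
record.write({ summary: "intro call", entities: ["ACME"] }, source: "entity_agent")

puts record.current[:entities].inspect # => ["ACME"]
puts record.sources.inspect            # => ["summary_agent", "entity_agent"]
```

The "what changed between versions" summary Thomas mentions would then just be an LLM call over two adjacent entries in `versions`.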

I think you have to very much get used to the fact that at any point data can be written from multiple places. At least for us that's a different point. Before, you could say, okay, I collect all the data, just do one transaction, and then it's written. And that's, I think, not the case anymore. So data can come in at any time, from any agent, from any long-running job, from any deep research or whatever. And you sometimes deal with stale data, sometimes even stale prompts. And what you also, in that

context, had to rethink: we are obviously building a multi-tenancy app. So that takes it to some kind of 3D chess level, because everybody has different prompts. Somebody who sells drinks as a customer has totally different prompts, customized to that person, than somebody who sells SaaS software. And also totally different data structures, because that first person maybe... or let's say you have a service rather than a product.

Valentino Stoll (45:57.798)
.

Thomas Witt (46:02.261)
You have a day rate and you have a project budget and whatever, whereas with SaaS you have churn and a monthly plan and all that kind of stuff. So the data is vastly different from client to client, and so are all the prompts. And keeping track of that, multiplied by which model they are using, is really tricky. And there's not a lot Rails does for you in terms of multi-tenancy stuff.

So there are a lot of things. And then making that observable with Langfuse or whatever, and tracking everything, because we are tracking everything, because we're saying the conversation is the basis of our data. So if you get an email, we may have to pull it back maybe two years later and reanalyze it from a certain angle which the client wants to know about. So where do you save it? How do you embed it? And that, especially in a multi-tenant context, is

Valentino Stoll (46:46.309)
you

Thomas Witt (46:58.901)
complicated I would say.

Valentino Stoll (47:01.499)
So would you say you're on board with DHH's new proposal for getting every customer their own kind of server in their closet kind of set up?

Joe (47:13.742)
Yeah

Thomas Witt (47:15.628)
Yeah, that's a new idea. So let's get back to installs. I mean, when I started my company in like 1999, I was sitting with CDs in data centers installing software. So if you're going back to that, I don't want that back, to be honest. That was terrible. That was terrible. No, I don't think so. I think, again, this multi-tenancy is a thing. And I think also the orchestration stuff. For example, we

Valentino Stoll (47:18.4)
Ha ha!

Valentino Stoll (47:26.565)
That's a good point.

Joe (47:32.686)
Hahaha.

Thomas Witt (47:42.315)
We think a lot about embeddings and how to logically understand data. And for example, OpenSearch and Elasticsearch made a lot of advancements in combining classic search with the actual understanding of something in terms of embeddings, vector search. And I think that's something which is barely touched, or I wouldn't say understood, but explored by the Rails community so far,

and will become much, much more important, because it won't be just about the relational data. And that's basically also my big, big rant about, let's say, the direction of Rails in general. It's very focused on Active Record. And as we are not using Active Record, you wouldn't imagine how many libraries we find which somehow, more or less, require Active Record. And there's already an alternative: there's Active Model.

It's very easy to hook Active Model up to any database, but then they have to have these foreign keys, they have to have these transactions or whatever. And I think that would be something: even with all this new Solid Queue stuff and the other Solid libraries they're building, that is all tied to Active Record. And I think we have to get used to the idea that, I mean, not everybody has to run DynamoDB, but having a vector database, OpenSearch or something, on the side of it

Joe (48:39.245)
Mm-hmm.

Thomas Witt (49:03.006)
will become very normal if we deal with all that embedding, because literally what our system does is translate structured output to unstructured output, back and forth, all the time. And Active Record is simply not built for that. You end up with an SQL form-field disaster like we talked about with HubSpot in the beginning.
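A toy illustration of the classic-plus-vector blend Thomas credits OpenSearch and Elasticsearch with: combine a keyword score with a vector-similarity score. The 3-dimensional embeddings and the documents here are fake and hypothetical; real systems use kNN over model-generated vectors.

```ruby
# Cosine similarity between two equal-length vectors.
def cosine(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  mag = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (mag.(a) * mag.(b))
end

DOCS = [
  { id: 1, text: "pricing call with ACME", vec: [0.9, 0.1, 0.0] },
  { id: 2, text: "support ticket about login", vec: [0.1, 0.9, 0.2] },
]

# Hybrid score: alpha weights keyword match vs semantic similarity.
def hybrid_search(query_text, query_vec, alpha: 0.5)
  DOCS.map do |doc|
    keyword  = doc[:text].include?(query_text) ? 1.0 : 0.0
    semantic = cosine(doc[:vec], query_vec)
    { id: doc[:id], score: alpha * keyword + (1 - alpha) * semantic }
  end.max_by { |r| r[:score] }
end

puts hybrid_search("pricing", [1.0, 0.0, 0.0])[:id] # => 1
```

In an actual OpenSearch deployment this blending happens server-side (keyword query plus a kNN clause); the sketch just shows why the two signals complement each other.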

Joe (49:20.736)
Right. So I'm actually curious about that, because you've mentioned DynamoDB a couple of times. The last time I used a non-relational database in a Rails app was not that long ago, but I remember there being a lot of overhead, and some unexpected overhead, with instrumentation, right? Like orchestrating applications to talk to each other, similar to what you're describing. So do you find that that is a significant

Sort of headwind in development against a no SQL database. Yeah.

Thomas Witt (49:52.789)
Yes, totally. It is, totally. I mean, first of all, when people come and try to learn our code base, it's really hard because the usual patterns do not apply. And you see that with LLMs: you really have to teach them no Active Record in every CLAUDE.md. And then it answers, yeah, and we have to show it.

Joe (50:06.7)
Right, that assumption, just like with humans, is gonna be there. Right.

Thomas Witt (50:10.198)
Exactly. And everybody behaves the same. So that's really hard. And I would say that's kind of a blocker, or not a blocker, but something which is not so straightforward in our development. And it's not just us with DynamoDB. If you use MongoDB or whatever, you will run into similar problems. So it's not just this "I don't like AWS anyway" world. And again,

Joe (50:26.508)
Yeah, right. Yeah, mine was Mongo. Yeah.

Thomas Witt (50:33.582)
in the end, OpenSearch or Elasticsearch are after all nothing else than a large DynamoDB or a large MongoDB, in a way. There is significant headwind, because there are a lot of things you take for granted, from very simple helpers to, let's say, single table inheritance. You simply can't have a class which inherits from another class and then query the base class and get all the results of it. That simply does not work, because it does not know how to do it.

So I looked into a lot of Active Record code, or investigated it together with Claude Code, to understand what Rails did. And often they're very smart decisions, but unfortunately they're tied very tightly to that relational database thing.

Valentino Stoll (51:16.215)
Yeah, it's a double-edged sword, right? The conventions are made to make it easier for LLMs. But if you want to do your own thing that's maybe just a little bit outside of the conventions, you're focusing on that. Yeah. I was just going to say, you're focusing on those customizations, so to speak.

Joe (51:23.64)
You

Thomas Witt (51:28.758)
Right. Yeah, but now look, there was a lot of stuff. Sorry. Sorry, you go ahead.

Thomas Witt (51:40.087)
But in general, I mean, for example, when I started my career, back in the mid-nineties, something which was very popular was functions in databases, PL/SQL, if you remember that. That was the most horrible thing you could do, because you write something in the database and it behaves totally unexpectedly. So the worst thing.

Valentino Stoll (51:56.58)
you

Joe (51:57.903)
Yeah

Thomas Witt (51:59.303)
And still, that's what I don't like about relational databases. They work great until they don't, because there is an index missing, or this transaction blocks another one, or whatever. You have no idea with inner join, outer join, left join, blah, blah, blah. You really have to be an expert sometimes. And I think a lot of stuff which is put into the database could also be very well solved in a model,

or in a service in Rails, especially when it comes to validations and stuff. So if you go down to very primitive types, DynamoDB basically only has lists and strings and numbers, that's it, and that makes your life easier and makes your code easier. So there are advantages of doing it that way. And I hope that the Rails community at least acknowledges that there is something like Active Model and that stuff can be solved differently. And I think, again, with that AI stuff, it will, because

basically you're dealing with JSON all the time, and good luck mapping that JSON to a relational database schema all the time. It's not working.
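A sketch of what "validation in the model, primitives in the store" can look like without Active Record, in plain Ruby: `STORE` is a stand-in for a DynamoDB table, and every name here is hypothetical.

```ruby
STORE = {} # stand-in for a key-value table like DynamoDB

class Deal
  attr_reader :id, :name, :stages

  def initialize(id:, name:, stages: [])
    @id, @name, @stages = id, name, stages
  end

  # Validation lives in plain Ruby, not in the database.
  def valid?
    !name.to_s.empty? && stages.is_a?(Array)
  end

  # Persistence is a write of primitive types: strings, numbers, lists.
  def save
    return false unless valid?
    STORE[id] = { "name" => name, "stages" => stages }
    true
  end
end

deal = Deal.new(id: "deal-1", name: "ACME renewal", stages: ["intro", "pricing"])
puts deal.save # => true

bad = Deal.new(id: "deal-2", name: "")
puts bad.save # => false
```

In a real app the `ActiveModel::Model` and `ActiveModel::Validations` mixins give you the familiar validation API on top of exactly this kind of plain object.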

Valentino Stoll (53:00.693)
Yeah, it's funny, you know, there's a popular alternative to Active Record for relational databases called Sequel that maybe even predates Active Record in some ways. And it works great. But you try and use that kind of thing in your own application and try and make agentic use out of it...

Thomas Witt (53:12.491)
Yeah.

Valentino Stoll (53:27.084)
It's going to struggle a little bit in comparison, right? And so I kind of wonder, do you have any ideas here on how we can better integrate these agentic coding use cases with these maybe one-off conventions? That are conventions, right? Like they are popular, and sometimes the use case does apply, right? And so how do we maybe

Thomas Witt (53:30.199)
20.

Valentino Stoll (53:54.053)
make things better so that it can work for these conventions, and set up maybe some kind of convention or integration or tooling where we can establish the patterns and make it easier?

Thomas Witt (54:03.521)
Hmm.

Thomas Witt (54:06.871)
Yeah, that's a good question. And I thought about it a lot, or I think daily about it, when we are building Vendis AI. So I think we have to rethink the way we deal with services, because regardless of the relational versus non-relational debate, most stuff is an HTTP endpoint these days. So the emails we are getting come in

via S3, obviously, but then you have your own endpoints at OpenAI where, for example, when you upload a batch, you have to fetch that batch, so it's another endpoint. And then you have your database, and then you have your telemetry service which tracks all that. And you have five different model providers you're talking with, and so on. So clearly that pattern... I mean, you can include it all and it works well with Service.call and so on in services. So it's not bad, it's good.

But structuring it and keeping track of it, I think there must be more to it. So I think we need to have some joint brainstorming in the Rails community to think about how to make services better and maybe tie services more to HTTP endpoints. Because you do a lot of glue code with retry stuff, and every library is different, and you hit a

capacity limit reached, so you do a retry on that stuff, and Google handles it differently than OpenAI, and blah, blah. So that's not great. And I think there is more to it. I have no idea how to do it or what would be better. But I think that's a direction we should be thinking in.
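The per-provider retry glue Thomas describes can be factored into one generic wrapper, sketched here in plain Ruby (the error class and parameters are hypothetical); each provider then just declares its own retryable errors and backoff instead of scattering ad-hoc rescue blocks.

```ruby
# Hypothetical provider error; real clients raise their own classes.
class RateLimited < StandardError; end

# Generic retry wrapper with exponential backoff.
def with_retries(max_attempts: 3, base_delay: 0.0, retry_on: [RateLimited])
  attempts = 0
  begin
    attempts += 1
    yield
  rescue *retry_on
    raise if attempts >= max_attempts
    sleep(base_delay * (2**attempts)) # exponential backoff
    retry
  end
end

calls = 0
result = with_retries(retry_on: [RateLimited]) do
  calls += 1
  raise RateLimited if calls < 3 # fail twice, succeed on the third try
  "ok"
end

puts result # => "ok"
puts calls  # => 3
```

A per-provider config (which errors are retryable, what the backoff base is) is then just a different set of keyword arguments, so Google and OpenAI differences live in data rather than in duplicated rescue logic.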

Joe (55:42.5)
Mm-hmm.

Valentino Stoll (55:43.928)
Yeah, I don't have a solution. I'm always looking for people to answer it for me, you That's my overall goal of this podcast is to just find people that maybe have the answers for me. And you know, you actually do have a lot of answers out there, you know, related to Falcon and Async and, you know, all of your new adjustments. So yeah, I mean, it's exciting to.

Thomas Witt (55:46.129)
No, me neither. I'm not smart enough for that. But maybe somebody else does.

Thomas Witt (55:57.184)
Yeah, that would be great.

Joe (55:57.893)
Yep.

Joe (56:09.955)
Yeah.

Valentino Stoll (56:10.125)
you know, to hear where the world is going because everything is shifting every day. So, yeah.

Thomas Witt (56:15.221)
It is, totally. It is.

Joe (56:18.084)
Yeah, I agree. I like the, I'm going to use the overloaded term agile, but your agile approach, by which I mean agility, I don't mean a framework, right? To the constraints that you're facing and the opportunities that are there.

Thomas Witt (56:36.535)
Totally. I mean, we couldn't build Vendis AI in traditional ways, because there's so much stuff to orchestrate, and it is the typical 10x. I definitely feel it. I don't know whether it's 10x or 8x or maybe 6x, but with a three-person team you can get really, really far. And I think that's why Sam Altman wrote this quote: oh, the next thing will be a one-person billion-dollar company. Maybe not a one-person company, but, I mean, hey, come on, WhatsApp built a multi-billion-dollar business with, I think, 30 people, 40 people.

So what I definitely think is that the time of huge teams is over. And you can't tell me... sometimes companies have like 200 developers. What do they do all day? I don't know. I mean, do they...

Valentino Stoll (57:18.339)
Okay.

Joe (57:18.724)
In fairness, I never really knew.

Thomas Witt (57:23.647)
No, totally, totally. I mean, there are e-commerce sites which employ 600 developers. What do they do? I mean, it's mind-blowing. So I think really having small teams which all understand the code base, which can commit, which can deploy at any time, and which have all these modern tools at their disposal... I mean, for like $400 you get both an OpenAI Max and

a Claude Max subscription, and that gets you very, very far. So I think it will be the age of smaller companies building a lot of great software where before it would have needed to be Salesforce.

Joe (57:55.737)
Yeah.

Joe (58:06.468)
Mm-hmm.

Valentino Stoll (58:07.92)
Yeah, you know, I definitely feel the Nx developer thing, whatever number that is. And for me at first, it was definitely, well, now I get to complete all my side projects and ideas that I have, right? But at some point that transition will

Joe (58:20.878)
Yeah.

Thomas Witt (58:22.195)
Exactly, that's a good thing as well, right?

Valentino Stoll (58:27.725)
fall off a cliff and you'll run out of things that you want to side-project. So I wonder what that world looks like, you know, when people are kind of exhausted by the speed at which they can experiment with things, and what draws their attention then, right?

Thomas Witt (58:32.408)
Thank

Thomas Witt (58:43.096)
No, no, no.

Thomas Witt (58:49.548)
I think there's so much software left to be built. I mean, I don't know how it is in the US, but just take a look at the government. There are so many processes which could be digital, where you think, I have to fill out a form for that? If just that got better, our lives would all become exponentially better. So there is a lot of stuff which is not digitized yet. And I think even maybe the software where we need a lot of consulting and whatever. So I think the role of all these

Joe (59:03.224)
Mm-hmm.

Joe (59:06.722)
Hahaha.

Thomas Witt (59:18.348)
consulting agencies will change as well. But I strongly believe there will still be a need for software like CRMs and Basecamp. I don't think that OpenAI comes around and now you manage all your projects in OpenAI and nobody uses Basecamp anymore. It's really hard to imagine. And sometimes it's so specialized and there's so much domain knowledge in it.

Joe (59:37.924)
Yeah.

Thomas Witt (59:41.785)
And sometimes you see how great it is, but sometimes you think: you didn't catch that, you don't understand that, I'm telling you now for the third time and you're still implementing it wrong, because apparently you didn't get the problem. So make no mistake, obviously in two years we will laugh about many of these problems. But good product management, and what everybody says, like having a good product manager, doing good documentation, all these very basic skills which were valid 25 years ago, they are now

Valentino Stoll (59:48.757)
You

Joe (59:48.868)
Heh.

Thomas Witt (01:00:10.412)
more relevant than ever. And if you master them, rather than just starting to develop something in some language because it's so cool and everybody uses React so do we, I think that can get you very far.

Joe (01:00:25.433)
Yeah.

Valentino Stoll (01:00:25.538)
Yeah, maybe you're right. Maybe we are seeing the revitalization of the Excel era, right? When everybody was once able to solve all their problems in Excel, and then they hit limitations. And now they get a little bit further, where they can solve all their problems with ChatGPT or something, except for the nice packaging that comes with using a product where somebody has focused on a very specific use case.

Joe (01:00:54.776)
Well, I actually already know what's going to happen to you, Valentino. I already know the future. First, you'll never run out of side projects. And what's going to happen is that what you deem a side project today is going to be laughable three years from now, because you're going to say, my side project is basically rebuilding Salesforce. Your little experiment will grow in size and complexity. That's your future.

Thomas Witt (01:00:55.096)
Yeah, but I mean what?

Thomas Witt (01:01:01.304)
Here it comes.

Valentino Stoll (01:01:08.044)
So true.

Valentino Stoll (01:01:23.884)
It's funny you mentioned that. I finally installed OpenClaw a... honestly, a month ago now, right? And, you know, I pay for domains. I have so many domains I've had for so long. And I'm like, finally, just like, all right, you know, here's the list of all the domains I have, what's valuable, what could you make a product out of? And then go build those products, right? So we'll see. I have three pull requests I have to review that are supposedly done.

Thomas Witt (01:01:29.89)
yeah, here we go.

Joe (01:01:30.894)
Yeah, yeah, yeah, it's fun.

Joe (01:01:36.65)
my God.

Joe (01:01:45.991)
That's good.

Joe (01:01:51.99)
Yeah.

Thomas Witt (01:01:52.856)
Ha

Valentino Stoll (01:01:53.411)
And all I gotta do is enter my Stripe credentials, you know. That's right, well we'll see, you know.

Joe (01:01:56.089)
Yeah, yep. Already there, yeah.

Thomas Witt (01:01:56.761)
That's three billion-dollar companies already done. You just need to release them. Yeah. I mean, maybe one thing I would add, because when I thought about it, I already said it before: I think what's massive is how quickly these platforms, let's just talk ChatGPT,

amass such a sheer amount of users. I don't think even the iPhone did. I mean, when I had my first iPhone, obviously I had it on the first day and I was totally excited about it. Many people came out and said, oh, but I have my Nokia Communicator or whatever. So the adoption curve was not that steep because it was so expensive. It was $600 then; now people pay $2,000 for a phone. So that changed.

Make no mistake, they have a huge user base and I'm sure they will try to become a platform themselves where people lock in, just like Google was for a long time. And I see it with everyone around me, non-technical people who ask every single question to ChatGPT. And I think we should prepare for a world, in terms of UIs, where we will live inside other applications. I mean, we had that a little bit with the iPhone, where we had apps which all lived in something else, and you see what kind of great market of

billion dollar companies emerged simply out of that. And I think once they really start to monetize that as a base and make an interface, and they already start doing it with these SDKs where you can have little mini apps for, I think, booking.com or something. But I think this will inevitably come, and maybe the others will also jump on the bandwagon. Meta will say everything now lives in WhatsApp, and whatever you type will also be routed to 15 different companies, whatever, I don't know. But I think that's the thing we should prepare for: to

go away from UIs and prepare for voice and chat as the main means of communication with apps. I think that will be the biggest change in the next five to ten years.

Joe (01:03:51.362)
I like it. Yeah.

Valentino Stoll (01:03:52.511)
Yeah, I'm game. You know, we'll have to have you on next year when you're a billion dollar company, and we'll have to learn how you scaled everything.

Joe (01:04:01.834)
Yeah. And we will publicly sympathize with you for having to share the billion dollars with two other people. That'll be the sad part. Thomas, it was really great having you on the show. It is a lot of fun talking with you, and yeah, we'd love to talk with you again.

Thomas Witt (01:04:03.097)
Okay, now.

Valentino Stoll (01:04:10.262)
Yeah.

Thomas Witt (01:04:11.513)
Exactly. So, post-grade heaven you.

Valentino Stoll (01:04:13.878)
you

Thomas Witt (01:04:24.513)
Yeah, absolutely looking forward to it. And everybody who's looking for a CRM, reach out to me. We're in private beta, but I'm happy to onboard customers. Yes, vendis.ai, or write me an email at thomas@vendis.ai and I'm happy to walk you through.

Joe (01:04:31.864)
Check out vendis.ai.

Valentino Stoll (01:04:39.542)
If people wanted to find you on social, are you on X or LinkedIn or any of that stuff?

Thomas Witt (01:04:44.409)
I'm on X, on GitHub, on LinkedIn: ThomasWitt, basically. It's ThomasWitt or thomas_witt, so it's kind of easy to find me.

Valentino Stoll (01:04:52.79)
Gotcha. Yeah. And he's active in the Ruby AI Builders Discord, if you're interested, yep.

Thomas Witt (01:04:56.345)
Yeah, basically that's what I would really recommend to everybody: join that Discord, the Ruby AI Builders, because there are many people there with sometimes very controversial opinions, but you can learn a lot.

Joe (01:04:57.198)
Yeah.

Joe (01:05:09.443)
Yeah.

Valentino Stoll (01:05:09.92)
Yeah, there's lots of great stuff in there. I always like seeing the controversial chatter. You know, sometimes I'll find I'm on one side and then quickly on the other by the end of the conversation. So yeah, it's a good thing. A lot to learn.

Joe (01:05:22.564)
Well, that's a good thing.

If you're in New York right now, come and see Valentino, our very own, give a talk at Artificial Ruby tonight.

Valentino Stoll (01:05:39.862)
Yep. Yeah, I made a Ruby gem called Chaos to the rescue, where it uses method_missing to patch itself in real time. It's a lot of fun. It's mostly fun. We'll see. I mean, maybe it will be a serious thing someday. But method_missing Claw, exactly. The ultimate loop. I finally got it the other day.

Thomas Witt (01:05:40.247)
Wow.

Joe (01:05:51.949)
I love it.

Thomas Witt (01:06:00.727)
Method Missing Claw

Joe (01:06:02.584)
Yeah, I like it. Yeah.

Thomas Witt (01:06:05.133)
Ha ha ha.

Valentino Stoll (01:06:09.15)
When I'm in an IRB session, if I type quit, it actually exits the program, instead of saying, "I don't know what quit is."
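The trick Valentino describes can be sketched in a few lines of plain Ruby. This is a hypothetical illustration, not the actual gem: a console object whose method_missing intercepts unknown bare words like `quit` and treats them as commands instead of raising NoMethodError.

```ruby
# Hypothetical sketch of the idea above: intercept unknown method calls
# via method_missing and turn them into console commands. All names here
# are illustrative, not taken from the real gem.
class ChaosConsole
  COMMANDS = {
    quit: "exiting session",
    help: "available commands: quit, help"
  }.freeze

  def method_missing(name, *args)
    if COMMANDS.key?(name)
      COMMANDS[name] # a real REPL would perform the action here
    else
      "I don't know what #{name} is"
    end
  end

  # Keep respond_to? consistent with what method_missing handles.
  def respond_to_missing?(name, include_private = false)
    COMMANDS.key?(name) || super
  end
end

console = ChaosConsole.new
console.quit # => "exiting session"
console.foo  # => "I don't know what foo is"
```

In a real IRB session the same effect needs a hook into the REPL's input handling, but the core mechanism is exactly this: Ruby routes any undefined call through method_missing, so a bare `quit` becomes interceptable.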

Joe (01:06:13.921)
nice.

Thomas Witt (01:06:15.448)
Nice.

Joe (01:06:18.134)
Yeah, that's right, def quit.

Valentino Stoll (01:06:21.154)
All right. Well, thanks again, Thomas. And until next time, folks, happy hacking.

Joe (01:06:27.588)
See you everybody.

Want to modernize your Rails system?

Def Method helps teams modernize high-stakes Rails applications without disrupting their business.