The Latent Spark: Carmine Paolino on Ruby’s AI Reboot
with Carmine Paolino
About This Episode
In this episode of the Ruby AI Podcast, hosts Valentino Stoll and Joe Leo interview Carmine Paolino, the developer behind Ruby LLM. The discussion covers the significant strides and rapid adoption of Ruby LLM since its release, rooted in Paolino's philosophy of building simple, effective, and adaptable tools. The podcast delves into the nuances of upgrading Ruby LLM, its ever-expanding functionality, and the core principles driving its design. Paolino reflects on the personal motivations and community-driven contributions that have propelled the project to over 3.6 million downloads. Key topics include the philosophy of progressive disclosure, the challenges of multi-agent systems in AI, and innovative ways to manage context in LLMs. The episode also touches on improving Ruby's concurrency handling using Async and Ractors, the future of AI app development in Ruby, and practical advice for developers leveraging AI in their applications.
00:00 Introduction and Guest Welcome
00:39 Dependabot Upgrade Concerns
01:22 Ruby LLM's Success and Philosophy
05:03 Progressive Disclosure and Model Registry
Full Transcript
Valentino Stoll (00:02) Hey everybody, welcome to another episode of the Ruby AI Podcast. Joe and I are here today interviewing a very special guest. You may have heard of him from such popular repositories as the Ruby LLM project. Joe, you want to say hello?
Joe Leo (00:23) Yeah, I'm Joe Leo, co-host of the Ruby AI Podcast. I'm very happy to be here with Carmine Paolino. Hey, Carmine, welcome.
Carmine Paolino (00:33) Hey guys, thank you so much for having me. This is a pleasure.
Joe Leo (00:36) It really is. And this is going to be kind of a special episode because, you know, for Phoenix we use Ruby LLM every day. And so I have questions prepared from my engineers, who are excited that I'm interviewing you today. But I have a question first, and that is: why is the Dependabot upgrade from Ruby LLM 1.8.2 to 1.9.0 sitting in my pull request bin on GitHub? Why are my developers afraid to upgrade right now? Is there any reason, or can they upgrade with abandon?
Carmine Paolino (01:14) They can, sure. There's actually a new release today, which is 1.9.1.
Joe Leo (01:16) Hahaha! I see, this is already obsolete. That's good, I'm going to tell them that.
Carmine Paolino (01:24) What's your Dependabot config? It's weekly, I guess. It should be daily.
Joe Leo (01:29) Daily, yeah, that's good. And that's probably a good place to start. I mean, over the last six months... you'll have to correct me if my timing's off here. 1.0 was released about six months ago. Is that right?
Carmine Paolino (01:47) It was March 11, actually. So a little bit more than six months.
Joe Leo (01:51) March, okay. A little more. And since then we've had many minor releases and tiny releases, and we're now up to 1.9.1. Along that same timeline, the library has grown massively in popularity. So I'm curious to get your take. What do you think accounts for its success, and what is the commensurate amount of updating and, kind of, love that you've poured into the library over the past, let's say, six-plus months?
Carmine Paolino (02:26) Right. I think the success came from the very beginning. I didn't expect any of this, by the way. I thought this was mostly a project for myself. I built it because I wanted that kind of interface in order to build my company, which is Chat with Work. And four days later, someone posted it on Hacker News, and it became number one on Hacker News. So from the few hundred stars that I got from posting it on Reddit, it went to 1,700 stars in not even a week. So it definitely skyrocketed at the very beginning, and then it became a normal, steady increase in popularity. I think when you do things for yourself, and you're really passionate about it, and you're really catering to your own needs, there may be somebody else out there that feels the same.
Joe Leo (02:56) Mm-hmm.
Carmine Paolino (03:21) So I'm a true believer in building things for yourself and dogfooding. And I think that has resonated with a lot of people. So I built it for myself, and a lot of people agreed that that's a good way to do it, I guess.
Joe Leo (03:34) Yeah, it's a whole lot of somebodies that felt like, I need this thing too. Just checked right before we aired: Ruby LLM has over 3.6 million downloads.
So you really are leading something that is momentous in the Ruby community, and really in the development community writ large.
Carmine Paolino (03:36) Right, right. Well, I think it's probably the philosophy that I poured into it that became the reason why people really connected with it. So I believe that... sorry.
Valentino Stoll (04:06) Yeah, let's dig into that, because before I let you get off onto a tangent, I'm really curious: what was missing from all the other gems? I know you said you liked a very particular interface. What continues to drive that interface, with all of the things that are changing with all these models, that you see continuing to grow the Ruby LLM project? Here you go, one for you.
Carmine Paolino (04:44) Right. So yeah, the philosophy is that I think simple things should be simple and complex things should be possible. Also, models and providers are commodities, so you should be able to change them anytime. We have new models coming out every three days, so we need to be able to change them really quickly. And we don't want yet another way of doing things, because these providers have their own APIs; they want to kind of lock you into their own APIs, and I'm against that. Then also convention over configuration, which we all know and love. Progressive disclosure was another tenet of my philosophy. And also, look, I'm a solopreneur. I need to be able to build the companies that I want to build just by myself, and I don't want to spend a ton of money on the cloud. So I want to be able to run this on one machine and have thousands of connections at the same time. So for me, the last, maybe most important tenet of this philosophy is: one API, for one person, on one machine. Those are the core tenets.
Joe Leo (05:58) That's interesting. Can you talk a little bit about progressive disclosure? Because of the things that you mentioned, I think that might be the least familiar, certainly to me, maybe to others.
Carmine Paolino (06:07) Right. So when you use Ruby LLM and you do RubyLLM.chat.ask, then you have all these with_ methods. That's what I mean by progressive disclosure. First you can start with RubyLLM.chat.ask without even specifying a model; it will choose the default model. Then you can add the model name. Then maybe you want to use another provider. Maybe you want to add some parameters that are specific to that provider. Maybe you want to add a system prompt. All of these things you can add on top of the original call. And that makes it so much easier to understand what's going on, instead of having to configure everything from scratch and end up with a massive amount of code, or hashes that are really ugly.
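A minimal sketch of the layering Carmine describes, using the chainable methods documented for the gem (with_model, with_temperature, with_instructions); the model name and prompts are placeholders, and the exact method names are worth checking against the release you're on:

```ruby
require "ruby_llm"

# Level 0: the simplest possible call; the configured default model answers.
RubyLLM.chat.ask("What is progressive disclosure?")

# Each extra concern is layered on only when you actually need it.
RubyLLM.chat
  .with_model("claude-sonnet-4")                  # pick a specific model
  .with_temperature(0.2)                          # provider parameter
  .with_instructions("Answer in one paragraph.")  # system prompt
  .ask("Explain Ruby LLM's design philosophy")
```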
Joe Leo (06:55) Yeah, and a huge advantage of Ruby LLM is definitely the way that it abstracts that messiness away from the developer. But I am also curious: as you mentioned, a new model is released every three days. Some portion of the 3.6 million people that are using Ruby LLM want to use that model. So is your job just continuously configuring the interface for a new model behind the scenes?
Carmine Paolino (07:23) Fortunately not. Yes, I'm fine, thanks for asking. So we already support 11 providers, and some of them support hundreds and hundreds of models, like OpenRouter, for example, or even the local providers; GPUStack has Hugging Face and another model repository.
Joe Leo (07:35) Good. Hahaha! Mm-hmm. Yeah.
Carmine Paolino (07:51) So in theory, we support thousands of models, and I don't need to do work for each single model, fortunately. The thing that changes all the time is the model registry, but there is an easy way to refresh it. And now we support saving the newly refreshed model registry to a different file. This is a new thing for 1.8. So hello, Def Method engineers.
Joe Leo (07:54) We're on 1.8, but we need the new 1.9 features and goodies. Ah, it was new for 1.9? Then yeah, all right. We'll get on it today.
Carmine Paolino (08:22) Right. Sorry, 1.9. Yeah. Then we also support the database-backed model registry, which was from, I believe, 1.8. So you can actually save all of these new models as they come in. I have to say, the model registry was one of the first things I wanted to have in the library, and kind of the fundamental reason why I made all this. You can see it by, for example, calling RubyLLM.chat without even specifying a model. Or, just by specifying a model, it will know which provider it is, and so it gives you access to that provider instead of you having to remember which model is from which provider.
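To make that workflow concrete: RubyLLM.models.refresh! appears in the gem's docs, while the model name here and the claim about which provider it resolves to are illustrative assumptions:

```ruby
require "ruby_llm"

# Pull current model lists from the configured providers and update the
# registry, so brand-new models are usable without waiting for a gem release.
RubyLLM.models.refresh!

# Naming a model is enough; the registry knows which provider owns it.
RubyLLM.chat(model: "claude-sonnet-4").ask("Who is serving this request?")

# Or omit the model entirely and let the configured default answer.
RubyLLM.chat.ask("Hello!")
```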
Valentino Stoll (09:13) Yeah, that was one thing I really liked about OpenRouter. It was one of the first, I think, to have this provider concept. And it's funny, because I tried to work with Andrei on the Langchain.rb project to get a provider domain set up. It's hard when you already have an established gem to do some core reworking like this.
Carmine Paolino (09:38) Yeah.
Valentino Stoll (09:41) You had some advantage there, in that you already knew what was missing, knew what to build first. I'm curious: how do the provider mechanisms work? Is it easy or straightforward to set up? What challenges have you experienced trying to manage all these different APIs?
Carmine Paolino (09:44) It's not super challenging. I think right now it may become a little more challenging with the Responses API, because it's a completely different concept. I believe, correct me if I'm wrong, I haven't looked into it deeply, but they store the entire conversation themselves. This is fundamentally different from how we do things in Ruby LLM, where the concept is: we store the conversation ourselves, and then we send it all at the same time. It's not that hard to just send the last message, though, so that's also not that big of a problem. I think the most challenging thing is actually supporting every little thing that the providers add that is slightly different. And maybe the most challenging thing of all was implementing Anthropic prompt caching.
Joe Leo (11:01) Hmm.
Carmine Paolino (11:03) Every other provider does prompt caching by themselves, automatically. You don't even have to set it up. It's great. Anthropic decided that you should have full control over which messages get cached, and they have a whole book of a page that you can read about what happens when. It's crazy. And I didn't want a whole set of hooks and methods and things just for that specific quirk. So it was quite the challenge. There have been a few PRs that wanted to address it, and they fundamentally changed a lot of things in the library. So I was like, I'm not so sure about this. What if they decide this was the wrong idea at some point and change it? Then we're stuck with all this mess of code to rewrite again.
Joe Leo (11:51) Mm-hmm. Right.
Carmine Paolino (11:58) So I decided to go a little bit deeper and basically give users the ability to attach raw content. Because, correct me if I'm jumping too deep into details here, for Anthropic prompt caching you need to specify certain things, like the cache control, ephemeral, and also how long you want it, inside the content block itself. You have type text and then the content, and you have to specify the caching in that same block. So I had to give users access to that block, and that's why I invented the raw content blocks, which, I believe, was 1.9 also.
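For reference, this is the content-block shape Anthropic's prompt caching documentation describes, the thing the raw content blocks exist to let you express; how you hand it to the gem is version-specific, so check the 1.9 docs for the exact entry point:

```ruby
# An Anthropic-style content block with prompt caching enabled.
# "ephemeral" is the cache type, and "ttl" ("5m" or "1h") controls how
# long the cached prefix lives; both sit inside the block itself.
cached_block = {
  type: "text",
  text: File.read("big_system_context.md"),
  cache_control: { type: "ephemeral", ttl: "5m" }
}
# Handed to Ruby LLM as raw content (1.9+), this reaches the provider
# payload untouched instead of being rebuilt by the translation layer.
```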
Joe Leo (12:44) I see. All right. So that's not been in production for very long, so the jury's probably still out on how effective it'll be. But what I'm hearing you say is that you're trying to make the design decision that has the smallest impact on the overall library, because you're really just supporting one provider. One very important provider, but one provider nonetheless.
Carmine Paolino (13:00) That's right. That's right. Yeah. So in general, I think that's the challenge: to not completely mess up the whole library just for little quirks of little providers. Sometimes you need to sit with the ideas for a little while and think deeply, and then something comes up that may be better.
Valentino Stoll (13:39) Yeah, you make kind of a great point, in that Anthropic even has their whole access to thinking, right? Where you can add thinking blocks. So I guess that's where I wonder about the complexity within Ruby LLM of managing all the APIs. The Responses API comes with a whole bunch of new things that you can't do with other model providers. How are you looking at this landscape now that it's beyond just everybody using OpenAI's APIs? Now that the use cases are getting more advanced, what are your plans, longer term, for supporting these diverging APIs?
Carmine Paolino (14:25) I look at the common denominator. So for example, thinking is something we're going to look at pretty soon, because a lot of providers now have thinking, and we want to support it in a very straightforward, clean manner, in a way that I would actually enjoy using. And then, we essentially have two different layers in Ruby LLM. We have the plain old Ruby objects of Ruby LLM, in which you have the representation of the messages, the contents of the messages, the attachments, all of that. It's in one single format; it's one API. And then you have the provider translation layer.
Joe Leo (15:18) Mm-hmm.
Carmine Paolino (15:19) The provider translation layer parses the provider responses and also renders the requests to the provider. So to support all these APIs, we need to think about the best way to interact with that specific concept, implement it inside the pure Ruby LLM part, and then, in the provider translation layer, implement how to do that for each provider. Simple as that.
Joe Leo (15:46) Look at the wheels turning for Valentino.
Valentino Stoll (15:51) Yeah, I guess there's a lot of unknown usage there. I really admire that you and Alex Rudall of the ruby-openai gem have a very similar philosophy of just waiting things out. And it's very impressive, especially given the popularity that your gem has gotten. Being able to wait probably gets harder and harder: more issues, more pull requests, more people yelling at you on various channels, right? Why are you not solving this already, in this particular way? It's not easy, I imagine.
Carmine Paolino (16:35) That's fine. I can say no, it's fine. But no, it's not easy, that's true.
Valentino Stoll (16:46) Where do you see the turning point? Is it pretty natural at this point, where you see, okay, this now makes sense to implement? At what point do you let the floodgates open and just let people implement this stuff? Or do you try to keep hold of that design and decide?
Carmine Paolino (17:17) Yeah, I'm not sure if you guys are going to be satisfied with this answer, but it's gut feeling. At some point it just clicks and it feels right. There's no process or analysis or Excel spreadsheet or decision tree that I use in order to decide to implement something. It just has to feel right.
Joe Leo (17:18) I'm satisfied. It's honest, you know.
Carmine Paolino (17:42) And maybe we can rationalize it by saying it feels right when the API feels right, when it feels like I'd actually want to use this, and the complexity is not overblown for something that should really be simple. As I said, the first tenet of the philosophy is that simple things should be simple and complex things should be possible. That also means that sometimes you just give people escape hatches and say: hey, this part you can control, but warning, you have to know what you're doing.
Valentino Stoll (18:15) So how much of it at this point is you leading that design versus being influenced by others? Do you see more of it becoming external, with things starting to look right based on what other people are contributing? Or is it coming from your experience, watching stuff grow? Do you see one increasing over the other at this point?
Carmine Paolino (18:43) I'm not sure about the future. I'm kind of against predicting the future, so I don't know what the future is going to be.
Valentino Stoll (18:50) You lean on the LLMs for the next step.
Joe Leo (18:51) Yeah.
Carmine Paolino (18:56) That would be an even worse prediction. No, at the moment I'm leading the design of things. I certainly get influenced by what people do and what people think, as we all do, right? We're surrounded by our own environment, and it's kind of like reinforcement learning: we always get influenced by whatever people are saying and doing. But I still feel like I want to have the direction of at least the design for the foreseeable future. Ultimately, this is kind of my taste implemented in code. And yeah, it's worked out so far, so let's see. It may change in the future. We'll see how that goes.
Joe Leo (19:45) That's true.
Valentino Stoll (19:50) That's fair.
Joe Leo (19:54) I wanted to bring in a couple of questions from the development team. Is that all right with you? All right. So this one I think is relevant, based on what we were just talking about. This is from Steve. There's been a lot of discussion about multi-agent AI-assisted development lately, and he notes that Swarm recently released version two of their SDK, and it's built on top of Ruby LLM. So the question for you is: what challenges and opportunities do you see with this new development paradigm?
Carmine Paolino (20:27) Mm-hmm. So, I'm not a huge believer in multi-agent systems. You can clip that if you want.
Joe Leo (20:37) Okay. This is going to be clipped and put on my LinkedIn tomorrow. So please continue.
Valentino Stoll (20:42) I'm with you too.
Carmine Paolino (20:45) There you go. Perfect. So, I've been in AI for a long time now; it's been 14 years, if you count my studies. And the more you add models on top of other models, the more you add errors. So with multi-agent systems, at least so far, and maybe I also haven't used them enough, that's also true,
Joe Leo (20:53) Hmm.
Carmine Paolino (21:15) I can't believe that they're going to be extremely good in their output. Already, I'm using Codex, right? Every day. And I need to really baby it in order to get the kind of output I want. Even a normal chat with ChatGPT or Claude or whatever needs to be babied a lot to get the output you want. So when you add a lot of agents working in parallel,
Joe Leo (21:29) Mm-hmm.
Carmine Paolino (21:44) you get a little more accuracy, because each agent is doing a specific task, but overall you're using a lot of them, so your surface area increases. And I'm skeptical that that's a viable thing right now. It may be a great thing in the future, and it's great that people are already exploring it and making the best of it. I know that Kieran Klaassen from Every, for example, is a big fan of that. But at the moment, I don't see this being a really good way of working. What are your thoughts on this, guys?
Joe Leo (22:26) Alright. Go ahead, V.
Valentino Stoll (22:30) It's difficult for me, because I both use it and fight against it at the same time. So I guess I see both the benefits and the cons simultaneously, which I think is good, and I think I'm not alone in this. But yeah, I think breaking things down into distinct agentic units that do very specific things... it's not manageable. And I commonly liken this to
the whole flat-organization mentality that DHH originally started championing, where you get a manager of one and you'll be much more effective, especially if you can multiply that across your smaller organizations. And the problem then becomes just scalability: how does that scale? But as far as development goes, you are just one.
Carmine Paolino (23:29) Mm-hmm.
Valentino Stoll (23:46) If you work within a team, even at a larger organization, you're probably 10 developers at most, if it's structured right. And even 10 is manageable. But as it grows, it becomes less manageable, right? And I think that's even more true with LLMs as contributors. The more that you break things apart and distribute them, the harder they become to manage, and then you're just managing the distribution. And then how do you worry about quality as things grow? At what point are you just spending all your time managing all this stuff? Which maybe people want to do, versus actual coding, but personally I don't. But who knows how that'll evolve. Maybe
Carmine Paolino (24:33) Yeah, I think that's...
Valentino Stoll (24:42) people coming out of college now really don't want to program. That could be the case. I'm too far out of the educational domain to know. So, if you're listening out there...
Carmine Paolino (24:55) Maybe I don't understand that, because we program in Ruby, and I love Ruby. I think it's so good to program in Ruby. Why would you take it from us?
Joe Leo (25:03) Yeah.
Valentino Stoll (25:04) You know, I'm with you. Unfortunately, they don't teach Ruby in most colleges yet. If they did, maybe people would have a different mentality. But how many people coming out of college want to learn TypeScript or Rust or things like that? Maybe they do, I don't know, but it seems like they would rather talk to an LLM,
Joe Leo (25:26) Hmm.
Valentino Stoll (25:30) based on just talking to people new to the industry.
Carmine Paolino (25:34) Yeah, I'm not sure it's so much about the language design as it is that we're kind of lazy as people, right? And I think it's fine that we're lazy, but there's still a lot to be done in the LLM space before we have very good code generation that doesn't skyrocket into some pile of...
Joe Leo (26:01) Hmm.
Valentino Stoll (26:01) Yeah. Yeah. I mean, speaking to that specifically, that's one of my biggest cons with all of this. You ask it to do something very complex; you even give it a very refined specification of all the details that you want from it, and it's going to go and dump a ton of stuff that you then have to spend time looking through and weeding through. Compare that to having just designed up front, done your own rubber-ducking, done the Gary Bernhardt style of development where you test the things you want up front so you can see where the interface leads. All of that is lost with just pure implementation. Boom, here you go. Boom. And you have to keep level-setting as you go, right?
And be like, that's not what I want. That's not what I want. And then it becomes this whole can of worms of trying to course-correct. Which, if you break it down smaller, maybe isn't a bad thing, right?
Joe Leo (27:03) Well, that's interesting, because it sounds to me like both of you are describing agentic coding in general, whether or not it's multi-agent. And I'm curious, because Carmine, you mentioned at the top, okay, I'm using Codex every day and I really have to baby it, and I get that. And we've had people on the show multiple times, maybe excited, maybe frustrated, saying: yeah, I'm doing this thing every day, I'm using Codex or Claude Code to do this or that, and I'm trying to get better at it. But then where does the multi-agent thing come in? And, definitely for me, maybe for some people listening: how does that differ from, for example, the normal tool calling that happens whenever I ask Codex to do something for me?
Carmine Paolino (27:57) Right. So in multi-agent, you have multiple agents doing different things, and each single thing is conducive to the final output. Usually you'd have a master agent. For example, in deep research, you have a lead researcher; it hands out tasks to the smaller researchers, and they do specific things. That works really well, to be honest. Deep research is great.
Joe Leo (28:02) Mm-hmm.
Carmine Paolino (28:26) But at the same time, you have to think about the accuracy of the total output. This is, I think, what we were talking about. And yeah, it is similar to just agentic development, but it kind of skyrockets from there, it feels to me, where if there's one piece of wrong information, it just gets passed along. It could be that you use multi-agents to verify the output of things, and I think that's the right way of doing it, if you have to do multi-agent: student-teacher, or researcher-verifier kinds of implementations. I think those are the best. But how many times do you see that, versus: I have a swarm of 100 agents and they're doing all these things for me? And suddenly, okay, how do you even verify all that code? For me, it comes down to this: how many times in your life did you enjoy editing code, or reading other people's code, more than you enjoyed writing your own? I think writing code is so much more enjoyable, so much more fun, than trying to figure out a crazy amount of complicated, interlocking things that somebody else did. Unless it's written in Ruby and it's really beautiful. Then I really enjoy it.
Valentino Stoll (30:03) Yeah, I'm curious. You mentioned deep research being good for multi-agents, and it makes me think maybe there are specific tasks that are good for multi-agent approaches. So what else do you see, besides research, where a multi-agent approach is beneficial? And what tasks have you seen it fall apart on, where a single-agent approach works better?
Carmine Paolino (30:31) Right.
I think it's all about context management, which is perhaps the most important thing to take care of in developing an agent. By looking at how Claude Code and Codex work, I was actually inspired to change how Chat with Work works, because they do context management really beautifully. First you have the grep tool, the search tool, that searches in files, and you only get the specific matching line. So you can search across hundreds of files and get hundreds of specific previews of those files, and then the LLM decides which file to actually go and look at. So with multi-agent systems, what you can do is spread out a bigger task, like researching across the entire web, and each single agent looks at one specific aspect of it, summarizes it, and gives that summary back to the lead researcher, or to another step in between, who knows. At that point, you have much more context-efficient usage. And I don't know, are you guys familiar with context rot? Yeah. It's a big thing.
Valentino Stoll (31:55) Oh yeah. I was just listening to Lance Martin on the Latent Space podcast; he was talking about that. Do you want to refresh the audience on what that is?
Carmine Paolino (32:06) Yeah. So basically, even though you have models with one million tokens of context, it doesn't mean they perform just as well as if they'd only used 100K tokens of it. Actually, you can see a pretty steep falloff, I think it was even after a thousand tokens, even for the bigger models, because they start to catastrophically forget things, and the quality of their answers just gets poorer. I don't know if you've experienced it too: if you code for a long time in the same session in Claude Code or Codex, at some point what used to be a fantastic coding sidekick becomes this really messy, unwieldy agent that is not doing what you asked for.
Joe Leo (33:06) Yeah, but I used to have the same experience with junior programmers when I was pairing with them all day.
Carmine Paolino (33:14) Fair enough, they're just overwhelmed. We should have a vacation for the LLMs.
Joe Leo (33:16) Exactly.
Valentino Stoll (33:21) It's really funny. Yeah, so I'm curious, on the topic of context engineering, which has become kind of the new thing: where do you see that growing within the Ruby LLM gem? Are you looking for opportunities to introduce some design concepts around context management, or are you not even considering that and just letting people figure it out like everybody else?
Carmine Paolino (33:50) Not at the moment, except for one thing: when we introduce thinking, I'm already thinking, pun intended, of removing all of the thinking blocks, which I believe is what Codex does. You can see that it's quite context-efficient; it doesn't grow as much as Claude Code does. But apart from that, I don't see it. I mean, we could implement a lot of other things, like compacting of the context, but at that point we're essentially shipping a library of prompts, and I'm not sure your LLM library should prescribe what those prompts should be. I think it should be your own creativity and ingenuity that makes those prompts as good as they can be.
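As a side note, the grep-style tooling he describes maps naturally onto the gem's tool interface. A sketch using the RubyLLM::Tool DSL from the docs; the tool body, its parameter, and the file glob are my own illustration, not Chat with Work's implementation:

```ruby
require "ruby_llm"

# A Claude Code-style search tool: return only matching lines, with
# file and line number, so the model's context stays small.
class GrepTool < RubyLLM::Tool
  description "Search Ruby files and return only the matching lines"
  param :pattern, desc: "Regular expression to search for"

  def execute(pattern:)
    regex = Regexp.new(pattern)
    Dir.glob("**/*.rb").flat_map do |path|
      File.foreach(path).with_index(1).filter_map do |line, number|
        "#{path}:#{number}: #{line.strip}" if line.match?(regex)
      end
    end.first(50).join("\n") # cap the preview so we don't flood the context
  end
end

# The model calls the tool, reads the previews, then decides what to open.
RubyLLM.chat.with_tool(GrepTool).ask("Where is the retry logic defined?")
```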
Valentino Stoll (34:42) So that leads me to my next question: how are you testing? How are you benchmarking the library? Is there any room there for something new? We just had Vincent on, from DSPy.rb, and one thing I really like about it is that it's very evaluation-first. I feel like a lot of that gets missed. We're at this kind of inflection point of creativity hitting its limits, and we're finding the tasks that these things do well at. So how are you testing that it's doing the tasks you ask it to do, specifically?
Carmine Paolino (35:31) Yeah, evals are a big thing. They're going to become a big thing when I release the eval framework that I'm thinking about.
Joe Leo (35:38) Hehe.
Carmine Paolino (35:56) So, Chat with Work is now at a point where the UI works really well and the tools work really well, and I want to get a private alpha out with a few selected people. But after that, we really need to make sure that the responses people get to their questions are actually really good. I know that Kieran has released Leva, but what I want is a framework that, first of all, turns the concept of an agent into a class. You already have progressive disclosure in Ruby LLM, right? Which you can compact into an agent behind one method: you have RubyLLM.chat.ask with a system prompt and with tools and all of that, and you can call that an agent and put it in a method. I think it's nicer to have it in a class. It also makes people realize: this is an agent. You can do agents with Ruby LLM, which I think some people don't realize. And then you can use these agents, which are essentially a collection of tools and instructions, in the eval framework. So that's how I'm thinking about it; it's a two-step thing. I think the agent class is going to be very easy to implement, and the eval framework probably not, but we'll see. So that's how I'm going to benchmark it. But if you think about benchmarking in terms of the quality of the library, or where it goes next, because I think there was also that part of the question... yeah, it's based on what I like. No, to give you a better answer: I don't want to make Ruby LLM a framework for one specific type of LLM development. I want it to be an easy-to-use layer for communicating with LLMs that sits at the right complexity level, the right abstraction level. Not too shallow, like, for example, the OpenAI SDK or Alex's ruby-openai, which are a layer on top of the APIs; and not too complicated, like a multi-agent framework would be. I want to be the in-between, where you can simply send a message to an LLM, but you can also do something a lot more complicated in a very easy way. Again, it's the simple-should-be-simple, complex-should-be-possible way. In my talk at EuRuKo, and in the upcoming keynote at SF Ruby, I have a slide in which, in 20 lines, so it fits on a slide, there's a four-agent multi-agent system in Ruby LLM. I don't remember exactly what it did; I think it was some summarization task: look at this code and give me a security review and a code quality review and all that. It was 20 lines of code. So if you can do that, I think I'm happy.
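That slide isn't reproduced here, but a system of that shape is easy to imagine: each "agent" is just a chat with its own instructions, plus one more chat to merge the reviews. A sketch under the same assumptions about method names as the earlier examples:

```ruby
require "ruby_llm"

code = File.read("app/models/order.rb")

# Three reviewer "agents": each is a chat with its own instructions.
reviewers = {
  security: "You are a security reviewer. List vulnerabilities only.",
  quality:  "You are a code quality reviewer. List design issues only.",
  style:    "You are a style reviewer. List naming and idiom issues only."
}

reviews = reviewers.map do |role, instructions|
  reply = RubyLLM.chat.with_instructions(instructions).ask(code)
  "#{role.to_s.upcase}:\n#{reply.content}"
end

# A fourth agent merges the three reviews into one prioritized report.
report = RubyLLM.chat
  .with_instructions("Merge these reviews into one prioritized report.")
  .ask(reviews.join("\n\n"))

puts report.content
```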
Joe Leo (39:23) Yeah, first, that's very exciting for the upcoming SF Ruby. And also, I think that's the right philosophy for Ruby LLM: to stay generic and be a library that can be used in many different ways. I still think there's probably an opportunity for exactly what you were talking about earlier, a library for context, but I don't actually think that should be Ruby LLM's job. I think that's probably another job, and I think DSPy kind of gets at it, as a framework for building it. But I also think that all of us developers are, and probably should not be, creative writers right now. The sooner somebody comes along and says, no, no, this is actually the best way, these are the words you use when you're doing some generic task, the better it will be for everybody. Then we get out of the creative writing game and back to something closer to programming.
Carmine Paolino (40:25) Yeah, I think this is the struggle of a developer approaching machine learning. I don't know if you guys ever used classical machine learning. In 2009 I learned Rails, because I wanted to, in university, during my bachelor's. And a couple of years later I was doing my master's in AI, using Python and learning all these other things about machine learning.
Joe Leo (40:31) Mm-hmm.
Carmine Paolino (40:54) It was such a shock going from the deterministic way of thinking of a developer, if I do this, then this happens, to: I train a model, and who knows what happens. And you have to do all this vague testing, and you have these metrics that don't really match the business goals, and my God, how do you combine them together? So I think what we're feeling right now is a different version of that. But now, instead of interacting with a model where you put some numbers in and it spits some numbers out, it feels like a whole person that you're talking to, with a personality and a style and a way of thinking. So it feels a little bit different. I think we're never going to have a fully deterministic way of doing this. Each single model has its own style and "personality", I don't know if you guys can see the air quotes, and we should approach it that way. Of course, we'd all love to know the one best way of doing it, but I don't know if we're going to find it. You have to work per model and do a lot of testing. And that just sucks.
Joe Leo (41:56) Yeah, yeah.
Valentino Stoll (41:56) Hahaha. Yeah, it's funny, you know, the training data is so important for task selection, right? And you mentioned traditional machine learning: data quality is the most important thing you could possibly imagine. So if you're looking at an open source model, that's all you're really looking at: what data was used to train this thing?
Joe Leo (42:21) Yeah, it's a point.
Carmine Paolino (42:25) It's funny you say that. The single most important thing.
Joe Leo (42:28) Hahaha! Mm-hmm.
Valentino Stoll (42:50) I feel like that was an unlock for me: realizing that there's so much quality data on Hugging Face that you can just download, that's just available. And I feel like once people realize there's quality data out there, and that you can use it even with these less deterministic models to help course-correct, there are a lot of open holes to fill in that crossover. Now that machine learning people are, I won't say losing jobs, but transitioning to the LLM world, I feel like there are a lot of missing gaps around how we use those traditional methods to improve the determinism of these specific tasks. But I wanted to...
Carmine Paolino (43:31) Mm-hmm. Yeah, I've been in machine learning long enough to say there's no way to make it completely deterministic. You kind of have to embrace the mess. Absolutely.
Valentino Stoll (43:47) Right. But you can greatly improve the prediction accuracy, right? By percentages, orders of magnitude, you can course-correct for some tasks, especially if you have the data.
Carmine Paolino (44:04) Yeah. And the key is really evaluation, right? I think this is also why evaluation is the clear next step for Ruby LLM too. Even in traditional machine learning, you would do so many tests before putting a model in production. There's an entire industry, you know, Weights & Biases and MLflow and all of these frameworks, just for experiment tracking.
Valentino Stoll (44:33) Right, right.
Carmine Paolino (44:33) And the reason is that you do a lot of experiments. The problem now is that we can't train these models anymore, because they cost millions of dollars to train. So what can you do? Well, you can do a little bit of fine-tuning, if you have a lot of data and you really want a specific style of responding. But is it going to be game-changing for you? Not sure. And it should not necessarily be used for learning new concepts. Instead, it's all about context engineering. So those are the two tunables that you have. And unfortunately, or fortunately, probably unfortunately, all these models are different. For each model you need a slightly different prompt to get results that are just as good. If you have a certain benchmark you need to hit in terms of precision, then you need different prompts, different tools, different descriptions of those tools; everything you pass to the model needs to be a little bit different in order to get the most out of it.
Valentino Stoll (45:49) Yeah, totally. That's another thing you can see on Hugging Face: you explore a model and it tells you how to use it. That's the model card, which I feel like people just don't look at anymore, because they're either using Anthropic or OpenAI for the most part. But there are other options out there, and I'm excited to see the smaller models take over as an aggregate.
Joe Leo (45:58) Hmm.
Carmine Paolino (45:58) Yeah, yeah.
Valentino Stoll (46:19) I don't know if we'll see that anytime soon.
I am super interested in this topic, but I want to leave time for your whole bit on why async in Ruby is such a fantastic leverage point for your library, and more importantly, for Ruby's position in AI. I feel like you were so spot-on in your article on the future of AI apps with Ruby. So what's the high-level, 10,000-foot view of where you see this going? You mentioned the async framework as a great positional point, and with Tenderlove's, Aaron Patterson's, recent talk, he was talking about Ractors providing full parallelism. Where do you see the growth opportunity, and what should people be focusing on when they're building their AI systems?
Carmine Paolino (47:23) Right. So the 10,000-foot view is: if you're doing IO-bound operations, use async in Ruby. If you're doing CPU-bound operations, use Ractors. You can use threads too, that's fine, but they come with overhead, so why would you? That's it, that's pretty much my bit.
Valentino Stoll (47:46) Right. That's it.
Carmine Paolino (47:51) Thank you. No, I can give you a little bit more context. Threads are used a lot in Ruby, and the background job processors use the concept of a maximum number of workers. Why do they do that? Because threads consume a little more memory, especially virtual memory, actually a lot more virtual memory. I think it's eight megabytes of virtual memory per thread, versus 32 kilobytes, or four kilobytes, for a fiber; it's in my article somewhere. In terms of actual RAM it's not that different; you don't consume that much RAM, so there it's kind of equal, that's fine. But the context switching is a lot:
Valentino Stoll (48:31) Hahaha.
Carmine Paolino (48:46) I think it's 20 times more expensive. Long story short, you need to cap your threads, otherwise you're going to blow the virtual memory budget, or you're going to have way too many context switches, and your machine slows to a crawl. With async, you run everything in a single thread, and essentially it's collaborative... sorry, cooperative concurrency, which means each fiber has to yield to the other fibers, but they yield automatically at IO operations. So you don't really have to do anything, unless you're doing something really specific. What this means is that it's now possible... in fact, Samuel Williams, the creator of the whole async ecosystem, has created an Active Job adapter for async job. With that, you don't need a maximum number of fibers, which means you can run a lot more concurrent LLM operations, or any IO-bound operations, at the same time. That's where the "one machine" part of "one API, one person, one machine" comes in. You can run thousands and thousands of concurrent operations and LLM conversations at the same time. Because otherwise, you'd block one worker for each single LLM conversation you're having. And these LLM conversations can take, you know, 60 seconds, five minutes, God knows; if you put the thinking budget too high, probably 10 or 20 minutes, and it'll probably time out. So take care of your timeouts.
Joe Leo (50:14) Right. Yeah.
Carmine Paolino (50:44) But then you have one slot occupied by that specific conversation, and one less slot available. And you shouldn't have an enormous number of threads, even in Solid Queue or any of these other wonderful, but not LLM-optimized, background job processors.
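A toy version of the fiber story, using Samuel Williams's async gem (Sync and task.async are its documented API); the Ruby LLM calls inside carry the same caveats as the earlier sketches:

```ruby
require "async"
require "ruby_llm"

questions = [
  "Summarize ticket 101",
  "Summarize ticket 102",
  "Summarize ticket 103"
]

# One thread, many fibers: each request yields automatically while it
# waits on the network, so slow LLM calls don't pin a worker slot.
Sync do |task|
  answers = questions.map { |q| task.async { RubyLLM.chat.ask(q).content } }
                     .map(&:wait)
  answers.each { |a| puts a }
end
```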
Valentino Stoll (51:00) Yeah, that makes a ton of sense. I guess, how do you think about the scale of things? Do you find yourself needing to group certain tasks that you know take longer than others into certain queues? Do you do any of that kind of planning, or do you just throw things into Active Job and let it figure itself out?
Carmine Paolino (51:20) No, there's no prioritization or anything. At the moment, there's one single queue in Chat with Work. And the reason is that I moved away from another way of doing this. If you don't know Chat with Work, maybe your audience doesn't: it's basically an agent that communicates with your Google Drive and Notion and Slack and GitHub and so on. For the moment, it's only Google Drive. In the past, it used to be a normal RAG system, so I would synchronize all your files and index them, with embeddings and vector search and all that. And that required two queues, because you have the queue for fetching stuff and the queue for the conversations.
Joe Leo (51:52) Heh.
Carmine Paolino (52:09) But right now it's actually much more similar to Claude Code. That's why I call it Claude Code for your documents: it essentially has a grep tool and a search tool and a read tool and an update tool. So it doesn't need to fetch your whole data set in the background, and it doesn't really clog my queues much, or at all. So this is maybe specific to my use case, but
Valentino Stoll (52:34) Nice.
Carmine Paolino (52:39) you should probably use queues if you have different use cases. I guess that's the answer.
Valentino Stoll (52:39) From what I've seen, yeah. You know, it's funny,
Joe Leo (52:44) Mm-hmm. Yeah.
Valentino Stoll (52:45) you mentioned the read and update tools, kind of the CRUD actions of the Claude Code style. I feel like I've seen that growing in the Ruby Discord channel, at least. That style of, I won't even call it agentic coding, but...
Joe Leo (53:01) It's doing CRUD updates, or doing CRUD with AI.
Valentino Stoll (53:03) Right. Yeah, pretty much. Pretty much just giving it a Rails app to use.
Joe Leo (53:14) Right.
Carmine Paolino (53:16) Yeah, turns out we're all CRUD monkeys. Even LLMs.
Valentino Stoll (53:20) Right, even LLMs, yeah.
Joe Leo (53:22) Yeah. Here's another question from one of my developers, on the topic of Chat with Work: where have you found Ruby LLM to be very good at solving technical tasks, and where have you seen it come up short?
Carmine Paolino (53:43) Well, it's been developed within Chat with Work, so it's pretty good at what it does. It's kind of tailor-made.
Joe Leo (53:48) So it's always perfect for the task. Yeah, that's good. That's an advantage.
Carmine Paolino (54:10) What's it very good at? I don't know, the thing that I enjoy the most is the Rails integration, just because it has the exact same API specification as the
plain old Ruby objects API. So I don't need to think so much when I'm coding a new prompt, or I have another agent or something like that. It's very easy: it's just chat.create, and then you have the same exact parameters that you can pass to RubyLLM.chat, and .ask is the same. So that part I enjoy a lot. Just decluttering my brain from bullshit, that's the thing that I like a lot. Because I think we need to focus more on context engineering and prompt engineering and all of these tasks which are really important for making our products better, instead of thinking about the response format of a specific provider for a specific thing I'm doing, like streaming, for example.
Joe Leo (55:10) Yeah, that's good. That's good to hear.
Carmine Paolino (55:12) Falling short? Yeah, I don't really have any good options. Is there anything that you guys think it's falling short on?
Joe Leo (55:22) I don't have anything here. If I did, I would tell you. But I don't. I think we're generally very happy to be using it.
Valentino Stoll (55:30) I think just the Responses API. Yeah, I know.
Carmine Paolino (55:33) It's not there yet. It's not there yet.
Joe Leo (55:34) The Responses API.
Carmine Paolino (55:39) I also don't see too much of a reason to have the Responses API too soon, because it seems to me that we're going to lose audio support by switching to it, which is really strange. And the only thing we would gain is o4, I believe, and a couple of other models that are Responses-API-only. So far, OpenAI has actually provided backwards compatibility through the Chat Completions API for pretty much everything. It's going to come at some point, don't worry. I see you're disappointed.
Valentino Stoll (56:14) Hahaha. I mean, yeah, the biggest thing to me is the reasoning boosts that you can get out of it, especially in a world where there's compliance risk and you need to take advantage of their encrypted reasoning tokens to feed back into the model. There are just some optimizations they've built in now.
Carmine Paolino (56:43) Cool. Happy to hear that. Actually, I know about it.
Valentino Stoll (56:46) Yeah, there's a lot of great stuff built into the new APIs that gives you advantages for specific things. That's what I'm looking forward to. But I'll open an issue. Then I'll be like, hey, look what you're missing, Carmine. I'm just messing around. I mean, you can still use the Chat Completions API for most of it.
Joe Leo (56:59) There we go, that's the spirit.
Carmine Paolino (57:13) Yeah, yeah, that's kind of the point. But we will get there. We will get there. Maybe soon, maybe.
Valentino Stoll (57:17) Yeah. I mean, I'm definitely in the advanced category, so I'm already taking advantage of a lot of the reasoning models, and the thinking, and doing things that maybe it's not even ready to do in a lot of ways, hoping that the bitter lesson catches up with me and I can just delete half my code base. But slow and steady.
Joe Leo (57:36) Mm-hmm. Yeah, I know, it's fantastic. It's not so bitter when you think of it that way.
Valentino Stoll (57:46) Yeah, right.
Carmine Paolino (57:46) Yeah, true.
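Circling back to the Rails integration Carmine praised a moment ago: the gem documents an acts_as-style integration where a persisted chat exposes the same ask interface. A sketch; the attribute name on create has varied across versions (model: vs model_id:), so verify against your release:

```ruby
# app/models/chat.rb -- persist conversations with the gem's Rails layer.
class Chat < ApplicationRecord
  acts_as_chat # wires this record up to Ruby LLM's chat interface
end

# Same surface as RubyLLM.chat, but every message is saved to the database.
chat = Chat.create!(model: "claude-sonnet-4")
chat.ask("Summarize the latest shared document")
```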
Valentino Stoll (57:48) Except it will just keep generating. You know, by the time I've deleted half of it, it's generated another half. So, you know, net new.
Joe Leo (57:51) Right. Yeah, that's true.
Valentino Stoll (58:01) Well, we've covered a lot here, Carmine. Thank you so much for coming on, and for creating this fantastically designed Ruby gem. I know a lot of Rubyists out there are just kind of over the moon with it, for good reason. It follows all the things that we want out of a library.
Carmine Paolino (58:06) Thank you guys. Thank you.
Joe Leo (58:21) And to tack onto that: it also follows all the things that we know and love about the Ruby community, which is somebody like yourself building this, and yeah, I get that it solved a problem for you, but you kept building it, kept releasing it out in the open, and it's grown into this thing that so many people love. I have a tremendous amount of respect for that, and for you.
Carmine Paolino (58:42) Yeah, it's really nice to hear that.
Valentino Stoll (58:47) Yeah, keep it up. I'm going to keep following along and using it more and more. I've only just started. I have this habit, or bad habit, good habit, I don't know, a habit of repurposing: we have this little podcast admin Rails app that does our intake and prepares for episode launches, and I swap out the thing that runs it
Joe Leo (58:54) Mm-hmm.
Valentino Stoll (59:12) every time we have a new guest on who has some gem. So it was on DSPy, now it's on Ruby LLM. It's just kind of a personal experiment, but it was pretty painless, straightforward; all the components just swapped right out. So yeah, I think it speaks volumes about your library; it's very well designed. But anyway, if people want to reach out to you and find out what you're up to, what channels should they be looking at?
Carmine Paolino (59:28) Awesome. Yeah, thank you guys. I post on X quite frequently, so I guess that's one of the channels, probably the main one. It's @paolino, P-A-O-L-I-N-O. Also, my GitHub is @crmne, which is Carmine without the vowels, except the last one, for some reason. And then my website is paolino.me.
Joe Leo (59:49) Hehe.
Carmine Paolino (1:00:08) That's where you can find perhaps longer blog posts and stuff.
Joe Leo (1:00:11) Yeah, definitely check that out. There's some really good stuff on there.
Valentino Stoll (1:00:16) Yeah, and we'll link that in the show notes and make sure that people can follow along with all this async stuff; to me, that's the most fascinating part of this. It just drops right in. Awesome.
Carmine Paolino (1:00:23) Great, I have a blog post coming up. Maybe tomorrow, we'll see.
Valentino Stoll (1:00:32) Yeah, yeah, all right.
Joe Leo (1:00:34) Excellent.
Carmine Paolino (1:00:36) Awesome. It was really a pleasure to talk with you guys.
Valentino Stoll (1:00:37) Yep, fantastic. If there's anything you want to share: we have this little short segment at the end, just things you want to point people to, if you want.
Carmine Paolino (1:00:53) No, just: go make Ruby AI a big thing. I think we have the tools. I don't think we should be shy anymore about actually using AI, because AI has changed, right? AI is not building models anymore.
AI is building apps. And who are the best people to build apps? I think it's Ruby on Rails developers.
Valentino Stoll (1:01:17) Yeah, I agree. We're not biased at all. I mean, there are two people here with successful businesses operating in this space, and I've worked for a very large one myself, at Gusto. So, who cares if it scales or not, right? I think we've all proven that it does.
Joe Leo (1:01:17) You're right about that. Yeah.
Carmine Paolino (1:01:22) Not at all.
Carmine Paolino (1:01:46) It does. It does.
Joe Leo (1:01:46) Yeah.
Valentino Stoll (1:01:47) At this point, it makes money.
Joe Leo (1:01:49) It does, and there's room for everybody. Yeah, that's absolutely true.
Carmine Paolino (1:01:51) Yeah, exactly. Awesome. Hope to see you guys at SF Ruby. If not, it's been a pleasure.
Joe Leo (1:01:59) Yeah, it was great, Carmine. Thanks, take care. Bye bye.
Carmine Paolino (1:02:02) Bye.
Want to modernize your Rails system?
Def Method helps teams modernize high-stakes Rails applications without disrupting their business.