
This is a software-generated (AWS) transcription and it is not perfect.
I came to the U.S. to study artificial intelligence. In those days in India, you couldn't really study AI; you could do it at the master's level, but not at the bachelor's level. So I came to Stanford to study AI, and I was very lucky in that Stanford still had the last generation of professors who had struggled to succeed with AI. They had been through the expert systems revolution, and they had done some good stuff with it, but fundamentally it had failed; it had not gotten adoption in the market. So I got to learn from people who were not hyping it up, who actually had a healthy skepticism and humility about the use of AI. I ended up working at various places like Microsoft, Oracle, etcetera, then went to Harvard Business School to get my MBA. I ended up taking a class from a gentleman called Clay Christensen, who wrote The Innovator's Dilemma; the idea of disruptive innovation comes from Clay. I took his Building and Sustaining a Successful Enterprise class, and I asked him to be my advisor for a research project I was going to do, which became a business plan contest entry, which became my first startup, called BeyondCore. BeyondCore was using AI to automate the extraction of information from data. Basically, you pointed it at a database and it would come in and tell you, "Well, here's a PowerPoint slide of what's driving your revenue and what you need to look at." That got acquired by Salesforce, and I ended up staying at Salesforce for two years running the Einstein Discovery business; Salesforce rebranded BeyondCore as part of the Einstein platform. After two years of that, I basically decided I had to go and rethink AI. Salesforce is fantastic; they give you sabbaticals and things like that. So I managed to get a nice sabbatical and, with permission, worked on a book called "AI Is a Waste of Money," drawing on my two decades of experience with AI at the time. And I asked, what is wrong? All I knew was that AI was broken.
I didn't know how I was going to fix it, but the basic premise was that AI is fundamentally broken. That was the genesis of Aible, and it's been an interesting journey since then.
Fundamentally, Aible is the first AI platform that is focused on delivering ROI. That may sound very strange, but look at how AI was trained two years back when Aible first started: it was all focused on accuracy. You would have data scientists obsess over accuracy, maybe speed, maybe cost of training. They were not talking about the economic impact of the AI on the business; that was a secondary consideration, and most data scientists didn't fully understand what drove the business constraints. In fact, if you look at how AI was evaluated, you didn't even consider simple, basic constraints. Let's say in a fraud use case you only work on 5% of the transactions, or in a sales use case you work on 10% of the transactions. But we would evaluate AI on how well it does on the entire population. Let me ask a reasonable question: if you're never going to work on more than 5% of the deals, does it matter to you how the AI does on the other 95%? No, of course not. You just care about how well the AI does on the part you will work on. So when I say AI was broken and was not focused on ROI, I can give you story after story. But we just said: if our laser focus is on delivering business impact, quantified ROI, how do we do that? And that forced us to fundamentally rethink how we even train AI. So instead of training AI for accuracy and then adjusting it so it makes money, we actually started asking customers questions: What happens when we give you a correct prediction? What's the benefit? What happens when we give you a wrong prediction? What's the cost? How far can you work into the population? And now let me train an AI whose custom loss function, the thing it is optimizing, is money. Money, or it need not be money; it could be lives saved, whatever business metric you care about. Let's make the AI optimize for that. What a strange idea, right?
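The idea described above, scoring a model by the money it makes on the slice of the population you can actually act on rather than by accuracy over everyone, can be sketched roughly like this. This is a minimal illustration, not Aible's actual implementation; the benefit, cost, and capacity figures are made-up assumptions.

```python
import numpy as np

def business_value(y_true, scores, benefit_tp=100.0, cost_fp=20.0, capacity=0.05):
    """Score a model by the value it creates on the cases you can act on,
    not by accuracy over the whole population.

    benefit_tp: value of acting on a true positive (assumed figure)
    cost_fp:    cost of acting on a false positive (assumed figure)
    capacity:   fraction of the population you can actually work (e.g. 5%)
    """
    n_act = max(1, int(len(scores) * capacity))
    top = np.argsort(scores)[::-1][:n_act]   # act only on the highest-scored cases
    hits = y_true[top].sum()                 # correct predictions among actioned cases
    misses = n_act - hits                    # wrong predictions among actioned cases
    return hits * benefit_tp - misses * cost_fp
```

A metric like this can then serve as the objective when comparing or tuning models, so the model that "wins" is the one that makes the most money within the customer's real-world capacity, not the one with the highest overall accuracy.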
I still remember the first time I was talking to analysts about it. We had a big conference, and "weighted impact, not accuracy" was our big tagline, and our competitors were going around telling customers, "Yeah, that's because their models are not accurate, so they have to talk about other stuff. Our models are accurate, so we don't have to talk about that stuff." Two years later, every major vendor in the market is now talking about impact and ROI and optimization, even though their products can't do it yet. But at least they realized over the last two years how fundamentally broken their basic approach to AI was. And that's just one part of it. The other part is this: if you think about the AI creation process, it is fundamentally broken. What happens is a business executive and an analyst come up with a place where they need AI. Then they request a data scientist to be freed up. Then they have a series of meetings where they're trying to explain to the data scientist what they want, and the data scientist is trying to explain to them what can be done. Then the data scientist goes off and spends several weeks or months creating something. Then they come back and try to explain it to the business user, and that's a very fun conversation. Then they iterate a few times, and then they decide the model is ready to be deployed. Now the whole story starts again, because now they go to an IT person or a developer and say, "Hey, get this model deployed," and the exact same story repeats, because now you're explaining what the model is, what the data cleansing was, how you transformed the data for your features. Then somebody goes in and writes the code, then somebody goes in and tests it. Several weeks or months later, you have something deployed. Three months, six months after that, you find out whether your AI was successful.
So we took on a challenge where we said: can we take a customer from raw data to deployed AI and deliver value in less than a day? That sounds like a bizarre idea, right? Well, we got it wrong; a day was the wrong length of time. There is a public case study by a gentleman called Charlie Mirror, the CEO of a sewing company. He was able to go from raw data to deployed model to value in two hours, and he found $3 million of value, and he personally did it. It was a CEO, not a data scientist, not an expert, who did it, and we wrote up a case study on it. It may very well be a world record: in a real business context, going from raw data in Salesforce to a model deployed to a CEO being able to understand the results and do something with them, where he found $3 million of value, in two hours. That's kind of ridiculous. So I keep telling my team I'm just not setting tough enough goals for them. One day was clearly too much time.
The first thing is how we scoped the minimum viable product. See, the problem with defining a minimum viable product is that writing a spec for it is completely useless. I go back to setting challenges. Before we released our product, we set a really public challenge for ourselves. At UC Berkeley there was an AI conference happening, and we set up a contest (we still hold this contest) where we took a bunch of high school kids, history majors, and MBA students, gave them two hours of training on Aible, and put them up against expert data scientists. Now remember, we had not launched the product, and nobody had really used this product yet, so you can imagine how my engineers felt. Everybody claims their product is easy to use. Our competitors have ridiculously complicated products; do you think they wake up in the morning and say, "We have ridiculously stupid, crazy, complex products"? They say, "We have the easiest product in the world," because to them it is easy. They're not lying; it's just that to them it's easy. At many of these companies the CEOs are data scientists, so even for them it might be easy, but for a real person it's not. And why the two-hour challenge? Why should we only provide two hours of training? Because what I've found is that almost any person will spend a couple of hours to learn something important, but over two hours becomes an effort. And why high school kids, history majors, MBAs? They're not corrupted by assumptions about what can't be done. Students tend to have a willing suspension of disbelief. If I went to executives and said, "Hey, do it in two hours," they would just say no, it's not possible. A high school kid doesn't know what is impossible yet. So we used the students, and that's how we tested out the minimum viable product.
Guess what happened at the end of the two hours: the best high school kid had beaten every expert data scientist. The data scientists very reasonably pointed out, "This is unfair; we can't do anything in two hours." They asked for three days, and we gave them five. After five days, only four out of the eleven expert data scientists beat the high school students. And to me, if after five days less than half of the data scientists beat the high school students, that's actually a pretty good place to be. The problem people have with defining a minimum viable product is that it often becomes a compromise: development will say, "No, no, this is too hard," sales will say, "No, no, I need at least this much," and you end up with a compromise, instead of turning it into a challenge, something scary. I'll tell you, I was scared going into that conference, because it was public. If Aible had fallen on its face, that information would have been public. But you believe, and you believe in your team. The second minimum viable product test we put up was the one I mentioned to you: can we create ROI in a day? That ended up taking us a few more months, but at that point we actually went GA with the product, and it was a very interesting ramp-up after that. And the product evolved over time. Right around the time we got all this done, we thought, "Hey, we have a fantastic product, it's going to be great, we're going to have a big launch." And then Covid hit. One of the interesting things about Covid is that every model you've created based on static data is out the window when the world changes like that. So we looked around at the team and said, "Look, we started with a really important vision, but that's not the only problem right now."
The vision is still valid, because you still need ROI and you still need it fast. But, man, you can't create a single predictive model and say that's good enough when the world keeps changing. Over the span between March and July or August, the team completely changed our capabilities. Instead of creating one predictive model, we started creating a portfolio of predictive models, each optimal under different circumstances. So one model is efficient if your average deal size goes up 15%; another model is efficient if your sales team shrinks by 30%. Essentially, depending on how the world is doing with Covid, different models will be optimal, but they are all created up front, and now you're just moving to the appropriate model. That was a fundamental redesign, because if you look at the entirety of our competitive space, everybody is obsessing over creating one perfect model. Even if they use model ensembles, they're averaging the models to become one model; the objective is to deliver one model. We are creating many, many models that are deployed at the same time in serverless form, because if you want to deploy many models, you can't run dedicated servers anymore; the cost would be prohibitive. So you can actually think about the DevOps part of this, the deployment part of this, and we did all that. So instead of getting scared by Covid, the company basically turned adversity into a question: how do we riff on what we believe in to really create value for customers today?
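The portfolio idea described above can be sketched as a set of scenario-specific models trained up front, plus a simple router that picks whichever one matches current conditions. This is a minimal illustration, not Aible's actual design; the scenario names, thresholds, and stand-in models are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ScenarioModel:
    name: str
    matches: Callable[[Dict[str, float]], bool]  # does this scenario fit today's conditions?
    predict: Callable[[float], float]            # stand-in for a model trained for that scenario

# All models in the portfolio are created up front; as the world changes,
# only the routing changes, not the models.
portfolio: List[ScenarioModel] = [
    ScenarioModel("deal_size_up_15pct",
                  lambda c: c.get("avg_deal_size_delta", 0.0) >= 0.15,
                  lambda x: x * 1.2),
    ScenarioModel("sales_team_down_30pct",
                  lambda c: c.get("sales_team_delta", 0.0) <= -0.30,
                  lambda x: x * 0.7),
    ScenarioModel("baseline",
                  lambda c: True,  # always matches; used as the fallback scenario
                  lambda x: x),
]

def pick_model(conditions: Dict[str, float]) -> ScenarioModel:
    """Route to the first scenario whose conditions hold (baseline last)."""
    return next(s for s in portfolio if s.matches(conditions))
```

In a serverless deployment, each scenario's model would sit behind its own lightweight function, so keeping many models live costs little until one is actually invoked; the router just decides which one gets the traffic.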