Luster with Christina Brady
Outline Summary: Luster's Diagnostic-Driven Enablement Platform
Intro
This transcript features an interview with Christina Brady, co-founder/CEO of Luster, about a diagnostic-driven GTM enablement platform. The core promise is to diagnose skill gaps, predict revenue risk, and prescribe tailored practice for every rep, at scale.
Center
Origins and motivation
The founder’s 18 years in go-to-market roles fueled a personal mission: prevent customer collateral damage caused by untracked team competency.
Frustration with reactive training, falling back to "train everyone on everything," and declining KPIs led to building a predictive, diagnostic-first solution.
Gap in the market and best-in-class critique
Traditional approaches rely on high-cost audits, self-reported gaps, and CRM data that reflect activity rather than proficiency.
Diagnosis was missing; enablement alone often failed to prevent revenue erosion. Luster emphasizes a two-step process: diagnosis then enablement, not enablement in isolation.
Luster’s core value proposition
Diagnostic-first: map skills at the atomic level across every sales stage, per role, and per individual.
Prediction: forecast which proficiency gaps will impact revenue within 24–48 hours, enabling pre-emptive practice.
Prescription: generate customized practice sessions, including proactive dress rehearsals tied to upcoming calls.
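The diagnose, predict, prescribe loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not Luster's actual code: the `Rep` type, the function names, and the 0.80 proficiency threshold (drawn from the "80 to 85% or higher" figure mentioned later in the transcript) are all assumptions, and matching skills to calls by shared stage name is a deliberate simplification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the diagnose -> predict -> prescribe loop.
# All names and the threshold value are illustrative assumptions.
PROFICIENCY_THRESHOLD = 0.80

@dataclass
class Rep:
    name: str
    skill_scores: dict[str, float]  # atomic skill -> score in 0..1
    upcoming_calls: list[str] = field(default_factory=list)  # stage names in next 24-48h

def diagnose(rep: Rep) -> dict[str, float]:
    """Return the atomic skills where the rep is below proficiency."""
    return {s: v for s, v in rep.skill_scores.items() if v < PROFICIENCY_THRESHOLD}

def predict(rep: Rep, gaps: dict[str, float]) -> list[str]:
    """Flag gaps that intersect calls scheduled in the short horizon."""
    return [s for s in gaps if s in rep.upcoming_calls]

def prescribe(rep: Rep, at_risk: list[str]) -> list[str]:
    """Emit one tailored practice session per at-risk skill."""
    return [f"dress rehearsal: {skill} for {rep.name}" for skill in at_risk]

rep = Rep("Ana", {"discovery": 0.62, "negotiation": 0.91}, ["discovery"])
sessions = prescribe(rep, predict(rep, diagnose(rep)))
```

Here only the discovery gap produces a session, because it is both below threshold and tied to an imminent call; the negotiation score passes and generates nothing.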
Two modes of practice
Full call simulations that mimic real customer interactions.
Skill drills focused on specific objections, with an AI coach providing targeted feedback.
Both modes leverage cognitive psychology: practice-based learning and modular skills reinforcement.
Data integration and data hygiene
Integrations with Zoom and Teams enable both ingestion of prior calls and post-call scoring.
The platform can diagnose from calendar-based interactions and automatically pre-schedule practice sessions tailored to imminent customer calls.
Data governance and security
Luster uses a platform approach with customer-specific, closed, trusted instances.
This design supports SOC 2 compliance, security hardening, and the ability to swap LLMs without compromising enterprise data.
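The "swap LLMs without compromising enterprise data" idea amounts to putting a customer-specific trust layer in front of an interchangeable model backend. A minimal sketch, assuming a structural interface and simple term redaction; the `LLMBackend` protocol, provider classes, and redaction rule are hypothetical, not Luster's architecture:

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Any model provider exposing a text completion call (structural typing)."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

class TrustLayer:
    """Customer-specific layer: redacts sensitive terms before any model call."""
    def __init__(self, backend: LLMBackend, blocked_terms: set[str]):
        self.backend = backend
        self.blocked_terms = blocked_terms

    def complete(self, prompt: str) -> str:
        for term in self.blocked_terms:
            prompt = prompt.replace(term, "[REDACTED]")
        return self.backend.complete(prompt)

# Swapping backends changes one constructor argument, not the product code.
layer = TrustLayer(ProviderA(), {"AcmeCorp"})
out = layer.complete("Summarize the AcmeCorp deal")
```

Because the product only ever talks to `TrustLayer`, replacing `ProviderA()` with `ProviderB()` (or any other conforming backend) leaves everything above the layer untouched.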
Management and enablement at scale
Provides weekly, customized coaching reports to managers, highlighting proficiency, gaps, and concrete coaching actions.
Proactively tags managers to join particular calls where reps are not yet proficient, optimizing leadership bandwidth.
Turns enablement from a bottleneck into a scalable, autonomous journey for reps.
Enterprise-readiness and ramp programs
Highlights the high cost of bad hires and long ramp times; Luster supports prescriptive onboarding and faster proficiency gains.
The platform can assess candidates before hiring, ensuring skill alignment with role requirements and company context.
Future trajectory and product philosophy
The team prioritizes a robust, secure platform over rapid, vaporware-style features.
Balanced growth approach emphasizes stability and bespoke model training per customer, with flexibility to swap underlying LLMs as needed.
Outro
The host and guest underscore Luster’s potential for Lean Scale and other high-growth GTM teams, especially where revenue growth hinges on effective hiring and enablement.
Enthusiasm centers on a future where AI-driven diagnostics and personalized practice continuously improve sales performance with measurable impact.
The conversation closes with mutual anticipation for ongoing updates as technology and security standards evolve, affirming a commitment to enterprise-grade, reliable enablement.
Key Takeaways
Diagnosis: Atomic skill mapping for teams and individuals; two data sources (prior calls and scheduled meetings) for accuracy
Prediction: Short-horizon (24–48 hour) forecasting of revenue risk
Prescription: Custom practice, dress rehearsals, and AI coaching aligned to upcoming calls and goals
Enablement: Manager-focused coaching reports; proactive call tagging; scalable, autonomous learning
Security: Closed, customer-specific instances; LLM-agnostic architecture; SOC 2-ready
Ramp and Hiring: Prescriptive onboarding; pre-hire skill mapping; reduced ramp times and better hire quality
Note: The dialogue emphasizes practical adoption, measurable outcomes, and a thoughtful balance between innovative AI capabilities and enterprise-grade safeguards.
Full Transcript
Christina, thank you so much for being here. Really excited. We have the co-founder and CEO of Luster here to walk us through the product. I think what you've built is absolutely incredible. I can see a lot of use cases we could have at Lean Scale: scaling a sales team, anyone customer-facing. This is an incredible platform. So I'm curious, and I always love to kick these off with this: what gave you the inspiration to start Luster, and what gave you the courage to get it going and bring it to where you are now? Yeah. First, thank you, and thank you for having me. That kind of feedback makes my go-to-market heart very, very happy, because that's what I wanted to build. I would say this company is sort of a culmination of my pain, and making sure that nobody else has to feel it again. If you look at my background, I spent the last 18 years of my career in go-to-market. I started my career as a full-cycle AE and worked my way up through multiple different leadership positions, some C-suite positions, across technology. I've also been a trainer for the last 15 years, so I'm trained and certified in seven different methodologies. I know. Oh wow. I know, it's wild. And I've been lucky enough to train and upskill over 30,000 revenue professionals. The one thing that was very, very clear, not only as I was working with individuals but also as an operator at various organizations, is this idea that we make our customers collateral damage to our inability to properly measure the competency of our team and address a lack of competency before it erodes your revenue. And that's true in every single go-to-market function. It was frustrating for me to constantly feel like I was behind the eight ball. I couldn't see the proficiency of my team. My KPIs are declining. Sales stages are getting longer. Churn is going up. And then we look and we say, "What's going on? What's happening? Who's the problem? Is it the person? Is it the product? Is it our ICP?
Is it a lack of product-market fit? I don't know." And so you go through this reactive fire drill exercise where you say, "You know what? I know what we'll do. We'll just retrain everybody on everything, and then that will work." And then we stay in this tumble cycle. So I said, there has got to be a way for me to identify that my team was going to step in mud on these calls or these processes. How did we not know that we were going to biff that deal until after it happened? How did we not know that? So I started looking for technology that would allow me to be predictive instead of reactive, and it didn't exist. And what's crazy is so many tools advertise the fact that they are predictive, right? They're like, "We're going to predict your revenue." I'm like, "No, you're not." You know why? Because you're using CRM data, which is input by sales reps, who are the furthest thing from analysts. I know because I was one, and I've managed hundreds of them. So, okay, we could use their subjective data to try to identify their proficiency gaps. Of the thousands of people I've trained, I've yet to meet a single revenue professional who is like, "No, I'm actually keenly aware of these skill gaps that I have and where they're impacting my performance, and I know exactly what I have to do to overcome that." That level of awareness doesn't exist. And so using things like CRM data or previous customer calls to subjectively identify where you're going wrong is just a non-starter. There wasn't a tool that did this, and I said, I have to build one. Then I looked at my background and thought, I'm probably one of the people best positioned to do that, because I've lived this pain for so long, but I've also seen what works to uplevel people, and why the tools that exist today are just inherently broken or incomplete. So that's the long and short of why I said, "Okay, I'm going to do it."
Yeah, that's really interesting. And like you said, you're an expert in this, so that goes a long way. What was the best-in-class process before Luster? I always like to get into what the comparable was in the market. I mean, before you go into that, it's kind of a two-sided coin. One, do the people know what their skill gaps are? And then, in order to bridge the gap, it's a massive burden on the organization to do the training and role playing, and do they even have the skills to train properly? That's a big part of it, too. It's not their fault that they don't. What best-in-class looked like, and I've seen this done and participated in it probably hundreds of times, is first we say, "Okay, we have to skill-matrix the entire team." So we're either going to do that ourselves internally in spreadsheets: we get together and build our own competency maps based on what good looks like, and we have no way to measure those, but we put them on paper. Or we're going to hire a firm and spend hundreds of thousands of dollars for them to come in and analyze our team. They're going to analyze our team on CRM data and previous customer conversations and interviews with the sales reps, which is the same way you would do it yourself if you didn't hire somebody from the outside. This process would take probably three or four months. And then you would get what I call the problem plop, which is you plop the problem and you say, based on the research that we did, which was constructed on poor data and self-reporting, this is the problem that you have. So, welcome. And then you go, okay, first of all, that was the problem that we had four months ago, but if you're working in any kind of scaling startup, it's probably not the problem that you have anymore. But then you're still left with: now, what do we do about it?
Because the same team that caused these problems is now tasked with fixing them, with no way to actually overcome them. And so inherently we rely on asking the revenue professionals what they think their competency gaps are. That would be like a doctor saying, "Yeah, I've heard your symptoms. What do you think your diagnosis is?" Right? And the patient's like, "Oh, I don't know. I mean, I think maybe this." And they go, "Interesting." And they write it down. And then they say, "Okay, you can go." And the patient goes, "Well, what about my treatment?" And the doctor says, "Well, what I'm actually going to do is ask every patient that I have today what they think their diagnosis is, and then, based on the most popular answer, we'll just treat all of you as if you have that." Right? And the 10% of you where that is actually the issue, you might get better. The rest of you are going to get worse. And you know who's going to get blamed for that? The sales rep that you now put on a PIP, because you never properly diagnosed them and then prescribed based on their needs. So in the way that we've done it before, the diagnostic is completely missing. Predicting where that diagnostic gap is about to erode your revenue: completely missing. The prescription of what you specifically need so you don't make those mistakes and erode revenue: completely missing. We just do everything subjectively and then treat everyone the same anyway. So it's a waste of time and money. It doesn't work, and it candidly will never work. Yep. And I've seen you say diagnosis before enablement, so I think that's an interesting way to put what best-in-class should look like: let's actually do diagnosis, and then we'll enable. The shortcut was, let's just do enablement. Yeah, but for everything. So, yeah, that's really interesting. Well, I know you prepared a demo for us today.
I'm really excited to see Luster's approach to getting salespeople, anybody in front of customers, to be the best version they can be, and to dive in and see how we went from hundreds of thousands of dollars on consultants who aren't really even helping, and hundreds of thousands, probably millions, in time and expense on enablement, to something that's actually tailored to the salesperson and scalable. Yeah. To that point, I've spent hundreds of thousands of dollars on other training at previous organizations to get maybe a max 12% adoption, and that's if you could even measure it. So it's meaningful. So I have prepared something to show you. Here we can look at the back end of Luster, and the first thing to note that is really unique about this tool, and we talked about it a little bit, is that what Luster does first is provide that diagnostic, which is critical. If you deploy any kind of learning or development or training and you have not been able to accurately diagnose the needs of the team, it will be a waste of time. The other thing is, you could even address a skill gap that somebody might have, but the prediction and point in time are also important. If you are addressing a skill gap of mine that I don't have to perform against on a customer call for two weeks, and you're training me on it today, then two weeks from today I've forgotten 80% of what you've trained me. It doesn't ingrain and it doesn't stick. So the whole idea of Luster is: first, diagnose. Here you'll see what we have for the entire team and then also for every individual rep. You'll see that we break down your diagnostic at the atomic skill level. Whatever the skills are that are critical for your organization and your specific sales roles is how we break it down. It's not high-level feedback in order to get a diagnosis. It's every stage of your sales or your support process.
There are critical skills that you have to show a level of mastery in to get through that process. All of these are customizable, and the first thing that we do is take a snapshot of what your entire team looks like. So, where's your entire team's proficiency? We can also do that down to an individual, right? So if I am an individual performer here, we'll look at somebody who has completed something recently. I don't want to pick on anybody, but we might here. So we can look at an individual. For any individual on my team, I can also see their individual proficiency score, so that I understand exactly where they are and where I need to coach them. So now we start with this level of diagnosis. We can get this in two different places. Place number one: you can practice in Luster, and we will proficiency-map you, because the simulations that we'll go through are going to mimic a customer interaction exactly. The other thing that we can do is look at all the previous calls that you have had and proficiency-map you immediately, to identify where the gaps are based on what good looks like for your organization. Right, so that's pretty big. I just want to make sure: what's the best way to get that data into Luster? What tools are you integrated with, and what's the best format for somebody interested in bringing that data into the platform? Yeah, Zoom and Teams pretty much cover all of that. We have an integration with Zoom and an integration with Teams, so we can automatically not only look at all of your previous conversations for an individual, but also grade your existing customer calls after you have them. So we can say: here's how you're showing up in practice, here was your proficiency after that training your enablement team just did, and now you went on a customer call that mimicked that training session. What's the delta between those two?
So we can actually see, kinesthetically, how it's sticking, right? We can measure what's going to happen before the call, what happened on the call, what the gaps are, and what data you need to continue to uplevel. We're looking at the full picture there to, again, get a proper diagnosis. So then we talk about why the point-in-time piece is so important. Let's say that this is my proficiency map. What Luster is going to do is consistently scan this proficiency map. We're also going to connect to your calendar to look at the actual customer interactions that you have scheduled. And we're going to look at your map and say, "Hey, based on that discovery call that you have tomorrow, you're about to make a revenue-impacting mistake." So we're going to custom-create a practice session for you that mimics the call you're about to go on and proactively schedule it on your calendar, so that you can go in and practice and have a dress rehearsal of an actual call before you have it. These are some of those more magical pieces that, again, are all about: diagnose what's wrong, predict where your lack of proficiency is about to impact your performance in the next 24 to 48 hours, and then continue to do that in real time. So we diagnose, we predict, and then we prescribe the solution. Joe's a big fan of doing the dress rehearsal. I know a lot of sales teams don't even do that as a process, right? But to have that baked into your product, to predict when they'll need a dress rehearsal and schedule it for them, and they don't even need anybody else on the team, they can do it on their own: that's huge. That's a big part. I mean, we're working with teams where I think the biggest constraint is the bandwidth of the management team to be able to do the training.
So it seems like this could be guided by a rep, and they can go on this journey of getting this assessment from an independent system based on previous data and then go through this process of improving themselves without needing anyone there. And I think that's crazy, because that's a huge time suck if you're managing 10, 20, 30 reps; you cannot do that. No, you can't. And are you the right person to do it as well? We can scrape the metadata of an actual person on your calendar and truly create a dress rehearsal for you to practice a conversation with an AI version of that person, you know? A sales manager can't necessarily do that, right? If you've never been the persona of the person your team is selling to, then are you the right person to help them overcome the skill gaps? It's just too much onus put on sales management and enablement. The other thing about the diagnostic is we're not just going to diagnose practice for you. The diagnostic could be AI custom-created content that Luster builds specifically for you for a specific call. So Luster is going to scrape the metadata of that meeting and say, "Hey, here's a custom battle card for that meeting. Here's persona documentation that'll help you understand how to talk to this person and what they care about. Here's a list of discovery questions that you should ask to really learn how to pain-funnel." So Luster is not just prescribing AI roleplay. Practice is a huge part of testing whether you're ready before you do it, but it's not the only thing. Luster will even send every single sales manager, every single week, a custom coaching report for every person on their team based on the critical skills. Where are they proficient? Where are they deficient? Here are examples of that, and here's how you as a manager can uplevel them.
So then we're also celebrating the human managers by getting them out of the mud, showing them how to coach their people, and helping the manager improve and be better at their job. Again, I've been a sales manager for 15 years. Having that kind of insight means not going into every one-on-one and being like, "So, what do you think you need to work on?" It allows you to say, "I know exactly what you have to work on. I have examples. You have that call tomorrow, right?" Luster will also identify to a manager when a rep still is not at proficiency before a call, tag the manager, and say, "This would be a good call for you to attend. The rep is not proficient enough to handle this call on their own." So when managers are asking, "What calls should I join?", they're now going to have data that tells them: I know for a fact that this rep is not proficient on this call, and this is a good opportunity for me to jump on. Luster will proactively tag managers and say, you might want to join this call, because they've practiced, they're not getting there, it's not ingraining, and this is where you need some leadership support. So it's a tool for managers to uplevel, too. This is basically what the back end looks like, and these are examples of what different custom-created simulations might look like. You'll see there are two ways to practice, and this is also rooted in behavioral and cognitive psychology. There are two ways that adults learn: one is doing a full call simulation where I'm practicing the entire thing, and the other is focused on an individual skill. These are examples of different types of full calls that are created to mimic your sales process. Is it a prospecting call? Is it a discovery call? Is it a QBR? Anybody having any kind of customer conversation, Luster can mimic those. Is it a proposal call? Are we negotiating today?
But then there are also skill drills. To use a golf metaphor: you can either play an entire round of golf, or you can just go and practice your swing. So if you're having a difficult time overcoming certain objections, you go into the skill drill environment, which comes with a built-in AI coach. The coach is going to give you objections relevant to your role; you overcome them, and then it gives you feedback on what you could have said differently. You just keep getting hit with objections or product questions or security questions, whatever skill is identified that you need help honing. So, we can see here what an actual prescribed session will look like. "Hi, this is Candace. How can I help you today?" "Hi, this is Christina calling from Luster. It is a cold call. Do you have a few minutes for me to tell you why I'm calling?" "Hi, Christina. Honestly, I'm a bit swamped right now. Can you give me a quick idea of what this is about?" "Yeah. Candace, I was actually noticing on LinkedIn that you were paying a lot of attention to different AI enablement tools. Is that something that you're shopping for right now?" "Oh, AI tools, huh? I'm always on the lookout for things that can genuinely help our sales teams perform better. But I'll be honest, it's got to really address some key pain points we have. Can you tell me more about how your tool could help us out?" "Absolutely. The first question I have for you is: do you have the capability today to measure the skill deficiencies of your customer-facing team and then use that data to prevent revenue-impacting mistakes? Can you do anything like that at all?" "Today we do have some tools in place for tracking skill levels, but getting a clear, consistent picture across the whole team has been tricky, and those insights aren't always directly linked to preventing revenue-impacting mistakes. So how does Luster handle this differently?"
"Well, I guess I would ask: how do you currently measure skill proficiency today?" "Right now, we primarily rely on performance reviews, manager feedback, and some analytics from our CRM, but like I said, it's not always consistent, and it can be hard to connect those dots directly to revenue impacts. Does Luster offer a more integrated or automated way?" So, we'll continue to have that conversation. And the thing to say is, this is not a scripted conversation. This is true generative AI, so every conversation is going to be different. We didn't prompt Candace to say that that's how skill mapping is traditionally done. She learned it, because that's how it's done. And then at the end of the call, the feedback, again, is critical, right? What kind of feedback am I going to get? Is it going to be high-level and super generic? We'll walk through what the feedback at the end of a call actually looks like. I scored an 80%. Typically, Luster looks for a score of 80 to 85% or higher as showing proficiency. So I'm feeling good going into this call block, but you'll also see that it gives me feedback on every individual skill that it heard me deploy. Now, keep in mind, what I needed to work on was discovery and call prep. So I'm doing well here in the practice session. I may want to do one or two more, because every single one of them is different, but I can listen to this call and I can review the transcript. So, there you go. That's a little bit of what the actual practice session might feel like, and the feedback. All of that scoring goes into the master dashboard that CROs and enablement can see, and it feeds into your overall proficiency. Then this tool just keeps doing that reading of where you are: prescribing custom practice, measuring it, and looking at practice versus the actual call.
So what it would do after this is look at the actual call block that I went on, compare that to the practice sessions, and then continue to prescribe more prescriptive training based on what your gaps still are, or move on to something else since we've addressed that gap today. Wow. There's a lot of technology behind the scenes of that. I mean, you're transcribing what someone's saying. You're using text-to-speech. You're using language models to respond. Yep. There are a lot of latency considerations. How has building a product and a simulation like that been with the current AI, and where do you see that going in the next couple of years? I would say it's interesting building AI right now, because there are really two ways that it happens. Way number one is really quick tech, which can very quickly turn into a bit of vaporware. That quick tech is: I can build a user interface directly on top of a large language model and give the illusion of building a lot really, really fast. That's sexy for AI startups, because you want to look like you have this huge, robust tool that can do all of these things. But what you can't necessarily control for is your information being shared with the large language model. Do you have the capability to really make it feel specific and contextual to the organization that you are giving it to? You can't really control the latency, right? One of the things that Luster does is, as users are using Luster, it measures your natural human latency and amends the product to fit you, so that you feel like you're having natural conversations, versus being beholden to whatever latency you set your entire product at. When you're doing a custom build, you have the capability to do things like that. So the experience learns with the user.
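One simple way to realize the "measure your natural human latency and adapt to it" idea is a running average of the pauses a user leaves before speaking, which then paces the AI's replies. A minimal sketch under assumptions: the exponential-moving-average update rule and the alpha value are illustrative, not Luster's actual mechanism.

```python
# Hypothetical sketch: adapt reply timing to the user's own pacing by
# keeping an exponential moving average (EMA) of observed pause lengths.

def update_latency_ema(ema: float, observed_gap_s: float, alpha: float = 0.2) -> float:
    """Blend a newly observed user pause (seconds) into the running estimate."""
    return (1 - alpha) * ema + alpha * observed_gap_s

ema = 1.0  # start from a 1-second default pause
for gap in [0.6, 0.5, 0.7]:  # this user's measured pauses, in seconds
    ema = update_latency_ema(ema, gap)
# the AI's reply delay now trends toward this user's natural pacing
```

A fast talker pulls the estimate down over a few turns, while a deliberate speaker pushes it up, so neither feels interrupted nor left waiting.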
So you've got that quick tech, which is the sort of GPT wrapper, and then you have more comprehensive tech that takes a platform approach. One of the other main differences between the two is that with quick tech there is one instance, all customers are added into that one instance, and they are separated out only in the user interface. With something like Luster, every single customer gets their own trained, closed-off instance. It is a custom build for every single customer, but then we can control the information and protect it from a security standpoint. Luster is never sharing your information with the large language model. That's how we've been able to get SOC 2 Type 1 and Type 2 compliance and pass a pen test: because we're custom-building every single model. The downside is we can't be as feature-fast. We can't keep building new features super quickly, because for every new feature, because it's AI, you have to look at the security, the infrastructure, the stability, the operation, the performance. That's why we have a larger engineering team than a lot of other startups our size. It's not that we want to move slower; we want to move responsibly. And between that quick-tech versus platform approach, we made a very deliberate decision to go with the platform approach.
A lot of other folks building point solutions in this space started building around the same time or after us and all launched really fast, right? Then we finally launched, but we launched a better, more robust, more secure product that we can completely control. At the end of the day, that's what companies want: not the fastest tool, but the one that's going to be the most stable. And they want to trust a founder who's not selling on their product roadmap. That's the other thing: I will tell you what the tool has today and what it has tomorrow, and if the limited use case that we had when we launched isn't a good fit for you, then don't buy it now. And if it is, then buy it now. That's also how we've gotten ahead in terms of go-to-market and build; we've built some really good customer loyalty. Well, and I think the idea of security is becoming more and more important, especially when there are more AI companies concerned about where their data is going as well. Yeah. So I think building something that's actually enterprise-grade and production-ready is a great fit. I could see an amazing fit for small teams, but you really get the benefits in the enterprise space. That's right. If you have large teams, think of the efficiency and effectiveness this could drive, and you need to be enterprise-ready for them, right? And that's what we've been building for from the beginning. When we launched the product in July of 2024, it absolutely was not enterprise-ready. And we had enterprise customers, even CROs who were in my network, who were like, we'd love to buy this; we have a thousand reps. I would love to go back to my investors and say that I have a $500,000 ARR deal, but I'd be doing you a disservice and the product a disservice, and then you'd be controlling my roadmap instead of me building the infrastructure that I need.
And so we said no to some of those large enterprise customers. Well, not no, but: wait until we are enterprise-ready. And that takes a lot of things, not just in the product itself but in the infrastructure, and I can't talk enough about the security of the product. If you have a breach in one of these products, it's really, really, really bad. So we take security probably more seriously than a lot of folks in our space. Again, I come from 18 years of go-to-market. I've been at a company where there's been a data breach before. I've seen what happens when this goes south and when you build sloppy tech, and I just refuse to do it. So, what we mean by a more robust model: if you look at this, this is a little bit of the Luster platform. What you would see in more of a GPT wrapper would literally be the user interface at the very top, which is the insights and the coaching and the roleplay, the actual product, which is this piece up here, and then underneath this security layer would be the large language model that's powering the tool. For us, everything that exists within here is what we've affectionately called our discourse engine, Pearl: the lustery pearl. We had to be on brand. This is where we have multiple layers of custom, proprietary, coded information that simply belongs to Luster, and this is what we build for every single customer. The first thing that we do is give every single customer their own trust and security layer. We build that on top of the large language model first. Building like this also means that we are not beholden to an individual large language model. We can actually swap out large language models based on performance or cost or even their element of security, without impacting the core product. That's nice as well: we're not built on OpenAI and stuck on OpenAI. We're not built on Claude or Anthropic and stuck on them.
We can actually swap that out, which is really nice because we can be dynamic. On top of that is where we build a custom model to actually ingest the behavior of your people and define how we want Luster to behave: the personas of the people, the latency of the model, what we want your user interface to look like, the types of simulations that are going to be custom created. So we build that ingestion engine to be specific to every individual company. On top of that is where we build in your specific insights. We take a combination of your company information and third-party metadata that exists on the web to train the model to be an exact replica of your customer or prospect conversations. That means: who are the personas you're selling to? What are the critical skills, methodologies, or processes that you want to make sure your people, based on their role, are following? What knowledge does the tool have to have to give proper feedback, so it feels like you're getting feedback from a sales manager or somebody in enablement at your organization, and not just, "You didn't ask enough questions. Ask more questions sometimes"? And then goals, right? Every single conversation that you have as a sales rep or a CSM has an end in mind, a goal at the end of that call. We hardwire those goals into every single simulation so that it's actually measured properly. On top of that is the conversational AI: the latency, what the persona is going to look like, what their vibe is going to be, the personality of the person. You can give the person that you're selling to 45 different personality attributes. You can give them context, like: you're selling to a VP of sales who needs to buy a tool before they go on maternity leave in the next two weeks, and they haven't started yet, right?
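The persona attributes, context, and hardwired goals described above can be pictured as a per-simulation configuration object. This is an illustrative sketch only; the field names and scoring are assumptions for the example, not Luster's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SimulationConfig:
    """One custom simulation: who the rep talks to, and what 'done' means."""
    persona_role: str
    personality_traits: list[str] = field(default_factory=list)
    context: str = ""
    goals: list[str] = field(default_factory=list)  # hardwired, measured at call end

    def goal_score(self, goals_met: set[str]) -> float:
        """Fraction of the hardwired goals the rep actually achieved."""
        if not self.goals:
            return 0.0
        return len(goals_met & set(self.goals)) / len(self.goals)

# Example configuration matching the scenario described in the conversation.
sim = SimulationConfig(
    persona_role="VP of Sales",
    personality_traits=["impatient", "budget-conscious"],
    context="Needs to buy a tool before maternity leave in two weeks.",
    goals=["book follow-up", "confirm budget", "map decision process"],
)
score = sim.goal_score({"book follow-up", "confirm budget"})  # 2 of 3 goals hit
```

Hardwiring the goals into the configuration is what makes the post-call measurement objective: the simulation is scored against a list fixed before the call, not against an impression formed afterward.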
You can give it that specific level of conversational intelligence so that it really feels like a real call. And then on top of that is the output that comes after: the predictive skill insights and then the action that is prescribed from them. So this is the difference between just a large language model and a user interface, versus everything in between, the custom-coded piece of more of a platform-type company like Luster. That's really helpful to see, because the tech you're building goes pretty deep into language models, speech-to-text, all of it. You're kind of using it all, right, to the fullest extent. And we were talking about this a little earlier: it's about to get really, really good. It's already really good, but even trying to build this tech five years ago would have been really hard just based on the quality of the models and the AI that was available. So with your timing, and the infrastructure you built with where the market's going, I feel like you're just going to keep getting better and better. I'm really excited to see what the next couple of years look like. It's a great point, because if you look at every bit of that infrastructure, each piece is going to be improved in its own unique and specific way based on larger advancements in the market. By building this way, we have so many more points where we can provide a more delightful product for our customers, because we can impact it positively in so many different ways. Building it the right way first allows you to catch the upswing of technology without having to say, "We didn't build the product right in the first place, and now that we have customers in the system, we're going to go back and build the plane while it's in flight." And look, I've been in tech for a long time.
I've worked at companies that do that. So I'm not dogging on it, but also I am, because that then makes your customers collateral damage to you not building the product that you told them you were going to build, and then not being able to keep up to date on the new advancements that are made, which is also cruddy for customers. A hundred percent. I think you need to make it built to last, and the way you've done it is really intentional and elegant. So I appreciate that it doesn't feel like you're interacting with an AI bot. It feels like a real scenario, which I think is important for adoption. Yeah, and making sure you're getting organic interaction with it, too. Absolutely. Well, that's another thing that we hear. You'd be surprised at the size of companies, or maybe you wouldn't, where we say, "Hey, give us your training documentation. Give us your persona information. Give us the skills that you're measuring your people on." And they're like, "So, here's the thing. We don't actually have any of that." And I was like, "You guys are living on the edge. That is reckless." But, been there, right? Sometimes we don't have the time to stop and take the pebble out of our shoe because we have to keep running the marathon. I get that. But let's not get to the finish line with bloody feet. And so one of the things that Luster can also help with is, let's say you don't have any of that information. That's where that third-party ability for it to train itself really comes into play.
So we can say: based on the industry that you are in, the personas that you are selling to, and the role that they have, what does good look like for other sales reps on teams like this, selling to these industries? We can actually custom train, or even build, a methodology for you based on what is already proven to work for your industry. So if you have that information, great. If you don't, we can custom build it and measure it for you. Or we can look at what you already have and say, "Hey, based on the skills that you're currently measuring your people against, this might not be industry best practice." We can actually give you that information. So it's helpful for giving people a guiding light on what good looks like as well. A hundred percent. One last quick piece I was hoping we could talk about is the manager and exec use case, because one thing we run into a lot is teams building out a really intense growth plan. They have to hire and build capacity plans for so many AEs, SEs, customer success, SDRs, the whole list. So if you're someone who's going into a year of heavy hiring, enablement is a very top-of-mind thing. How would they work with Luster? What are some of the ways they could leverage the platform to get a really firm grasp on the talent and the ramping, and how could it improve some of those ramp times too? This is such a good question, because the last time I checked, the cost of a bad hire in go-to-market, for a frontline revenue producer like an account executive who turns over within one year, is over $300,000 on average. That's the cost to the business, and it's probably larger than that. And so what you're honing in on is another one of our core use cases. The reason why we get hiring wrong so often is, one, we have to take people's word for it. Two, people don't know how to actually hire.
And three, sales reps, if we're talking about that persona, are really good at interviewing. Sales reps are usually pretty good at selling themselves. And the problem is, the best sales rep at another company is not necessarily going to be the best sales rep at your company, because everything is so different. So no matter how much you interview people, put them through in-person role plays, competency map them, put them through panel interviews, you are still taking a wild guess. And then when we do hire them, we don't know how to ramp them anyway, right? You go to six different people at an organization and ask, "What's your ramp time?" and they'll just kind of look at you; there's no common denominator. So the first thing that you can use Luster for is, instead of doing that kind of cheesy interview role play, you can actually have candidates do that role play in Luster. Now I can get a skill proficiency map for you before you even start. Now I can see: are you walking in the door with the critical skills needed to be successful at this company and in this role? You can amend a simulation to measure a new employee; they're not going to have the product context and all of that, but you can build a simulation that measures the core competencies they currently have and the product knowledge they actually have. So then you can either say, "Hey, we're not going to hire you," because now you can actually measure skills and know someone is not a great fit. And that also keeps you safe from an HR standpoint, because you're not saying, "I just don't feel like you're the right hire." It's, "Hey, we have an objective measurement for this."
And then, if you do decide to hire them, instead of doing one-size-fits-all onboarding for everybody, I now know the skill gaps that are going to slow you down from producing revenue. So I can do prescriptive onboarding with Luster. While you're in a classroom setting learning all things about the product, Luster is going to meet you back at your desk and actually help level you up where you have those deficiencies. And so that's where we have seen the ability for enablement to really shine in onboarding, and for managers to know where to coach their new reps for the first time. They know, based on Luster, when a rep should get on a call and when they should not. And so we've cut those ramp times in half. And we're actually helping companies define what ramp looks like, right? What is your ramp time? We can measure the proficiency of your team increasing from the moment they are hired. So you can now have a standard denominator for what ramp looks like and how we measure people, and stop PIPing amazing employees that you just didn't level up properly from the get-go. Right. Yeah. Well, and I think for the right rep too, a rep that you'd actually want in your company, they would look at this as a positive. This isn't spyware or something to put you through unnecessary steps. That's right. They care about hitting quota and making money, and if you're giving them tools where they can coach themselves and improve their own skills so they can make more money and crush their quota every single quarter, the right rep will look at that as a positive. So if some managers or reps are listening to this and thinking, "Oh, I don't want to have to put somebody through all these steps," then they're probably not a good fit anyway. Right. Well, the thing is, you're going to be measured on your performance in one of two ways.
Either by your performance itself, once it has already declined and you're not making any money and you're on a performance improvement plan, or by your capability to proactively identify, before it erodes your KPIs and your metrics, that you have things to work on, and to do that in a safe space so that you can only shine, or so you know when to go ask for help and what you need help on. And so we tell folks: Luster is not meant to be a performance management platform. It is meant to avoid performance management for reps who should be at your organization. So we say, make your mistakes in Luster. Try new talk tracks here. Do weird stuff here. Fail, right? We have reps using this tool who will actually share horrible calls they've had and be like, "This guy was like Bowser. I could not get through to Tom in procurement," and they share it and everybody laughs about it. Right. It also creates this culture of learning and failing forward, versus the culture, so often, of "I don't want my manager to know that I have any issues. I don't want my manager to know that I'm not ready to do this job." And so then you get on a call and you're terrified. You can take all of that away by providing a real area for your team to play in, where they actually enjoy getting better at their job and see the results on paper. Yeah, I think that's awesome. That is awesome. CS teams too. Yes. Solutions engineering teams, sales engineering teams, BDRs. There are applications for all of those people. And yeah, I think it's awesome what you've built. If you're watching this and you're interested, how do you get going, test out Luster, get in touch with you? What's the quickest way to get on the platform? So, if you're like, "We want to kick off a POC right now," love it. Email me at christina@luster.ai. You can go on our website and fill out a demo form. You can find me on LinkedIn.
Anywhere you can find Luster on the open web is eventually going to get you to me or somebody on our broader go-to-market team, and we are very buyer friendly. We don't believe in holding our buyers to rigid processes, so whatever way you need to be delighted and feel really confident buying a new tool, we're going to make sure we do that for you. And we're also not afraid to let you test it and pilot it and try to break it first, because we're confident in the tech. Customers have come to us and said, "We're just going to sign up for a year out of the gate." I'm like, "Great." Others are like, "Will it do what you say it will do?" And I'm like, "Test it, because it will. And then you'll buy it, you know?" So, however you need to buy for your organization, we've got you. We're just in the business of delighting customers and keeping them. Well, Christina, this has been great. It's something we're thinking might be a good fit for Lean Scale, and I know it's definitely a good fit for a lot of the companies we work with. When we're inside doing revenue operations and go-to-market operations for high-growth, VC-backed startups, we say it all the time: the growth model is won or lost on hiring and enablement. And I think this is the most elegant and intentional way to make sure people are getting the tailored training they need, and getting a diagnosis in the first place of where they need help and where they can increase their performance. So I'm so excited about what you're building and what you're doing. Can't wait to see what you do next, especially, like Joe was alluding to, as the foundational technologies continue to improve. I know you're going to take advantage of them, and I can't wait to have you back when you do. Oh my gosh, I can't wait either. Thank you so much. Music to my ears.