
This is a software-generated (AWS) transcription and it is not perfect.
The best place to start this story is probably my undergraduate degree, which was a bachelor's in psychology. I was drawn to a psychology degree in high school, and in my sophomore year at UC Santa Cruz I took a quantitative psychology class, which was basically an advanced statistics class. What really grabbed my attention in that class was that most of the lectures were done entirely through the professor coding: he would ask a question and answer it by just coding in an R terminal up on a projector. That was the first time I really saw that there was power not just in understanding the statistics and the mathematics, but in how coding in general, and scientific or statistical coding more specifically, can help you answer questions and do things with data that just weren't possible before. From there, I started pursuing a quantitative psychology PhD, which I did at the University of Notre Dame. There, a lot of my projects, my master's, and my dissertation focused on the intersection of more traditional psychometrics and high-stakes testing, specifically around a field called item response theory, combined with big data: how do you do this at a scale that the testing industry didn't really have a background in? Now, my draw toward psychometrics was based on my advisor and her expertise in the field; I was drawn toward big data and the scale of the problem through an internship that I did with Pearson, which is my current employer. There, I was working with Dr. John Barons, who's still a vice president at Pearson. He was faced with the challenge of taking the massive amounts of student data that Pearson had collected through its existing products and figuring out what to do with it in order to draw insights that could both inform and feed back into the product as well as inform future products. That, to me, just confirmed my beliefs in both the importance of data and the importance of leveraging it, not just to make business decisions but to inform product and product development. Coming out of graduate school, I had my first job at a more traditional assessment company called NWEA. I left that company after a fairly short tenure and came to work at Pearson in a research and development lab that pursued more and more machine learning and artificial intelligence capabilities as the talent for those techniques became available, with the team both acquiring that talent and developing it internally, and as Pearson as a company started to shift toward investing more and more in AI. And so, over the past three to four years, I have personally developed a skill set that's more aligned with artificial intelligence, specifically deep learning and reinforcement learning, and the company as a whole has shifted its investment to support hiring talent in those fields as well as building that talent internally. Probably a year and a half ago, I got my own team, starting with a small team of two people tasked with bringing reinforcement learning capabilities into an educational product, and we launched that product a few weeks ago. It's a calculus tutor named ADA, and it's in the iOS App Store. The team of data scientists underneath me now numbers 11 and will probably grow to 13 or 15 before the end of next year. And so that's kind of the broad story, going back to that first interaction with a professor and an R terminal.
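For readers who haven't run into item response theory, a minimal sketch of the Rasch model, the simplest IRT model, may help; this is purely illustrative and not any of Pearson's actual code.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch (1PL) IRT model: probability that a student with ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# A student slightly above average, facing items of varying difficulty.
theta = 0.5
for b in (-1.0, 0.0, 1.0):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_prob(theta, b):.2f}")
```

The "at scale" challenge described above is estimating theta and b for millions of students and items from their response data.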
I don't think I could have predicted that I would end up here, but it's certainly been a journey, and it's tough to pull out just a few of the key incidents or experiences that shaped that career path. More recently, one of the biggest things has been the opportunity that my leadership, my VP, and other executives gave me to take on leadership early and to be a part of both strategy and vision meetings and then executing that vision, bringing it down from the VP level to what we actually do this week in order to meet our product deadlines and to build these new capabilities that the world hasn't seen.
Currently, we have made a strategic decision to hire people who are mostly going to be working in the office. I typically like to work from home about one day a week or one day every few weeks, and that's more for administration time, or time to really dig into something that I want to work on independently; that's typically how we use working from home. Most often I'm working in the office, and that's mostly due to the value of standing in front of a whiteboard with someone, building those personal connections, and having those conversations. Going back to the first part of the question, about responsibilities: I am accountable for delivering capabilities in our products. For example, I mentioned the reinforcement learning-based recommenders that are in the app today; I was accountable to our executives to deliver the code that met those product requirements and the product SLAs. Now, the challenge with that, and I think a challenge for applied R&D groups generally, is that you have to innovate and push the boundaries of what's possible, but also do that within a successful product launch time frame, with integrations for the front end, for UX and UI design and development, and for our engineering partners who actually put the final touches on the code. So my responsibility at the beginning of these kinds of projects is to understand the executive vision. For example, when our senior vice president, Malena Maranova, joined Pearson about a year ago, she said we were launching a product within a year, and these are the kinds of AI it's going to include: reinforcement learning, handwriting recognition. Then it's my job to bring that down a level and figure out what that actually means: working with our product team on what the actual product requirements are and writing those down; then taking that a level deeper, defining the API contracts for communicating between the different capabilities (a sketch of what such a contract can look like follows this answer); and then going a level deeper still, within a single recommender, what is the actual code that will deliver this? How do we know it's working, and how do we know how to improve it after the product launches? So, really, my role is a bridge between the executives and the day-to-day, mostly coding, tasks. One of my biggest responsibilities is to ensure that my team is both aligned and correctly prioritized to deliver on the executives' vision and strategy. Also, being a team manager, there are a lot of responsibilities I have to the individuals on my team: anything relating to HR, promotion, and compensation falls on the manager, as does professional development, finding support, and identifying appropriate conferences, for example, and just everything that comes with being an employee. I'm the person my direct reports come to, just as I go to my own manager when I have similar needs.
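As an illustration of what defining an API contract can look like in practice, here is a minimal sketch of a recommendation endpoint in Flask, one of the back-end frameworks mentioned in the next answer; the route, field names, and placeholder ranking logic are hypothetical, not the actual ADA API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/recommendations", methods=["POST"])
def recommend():
    """Hypothetical contract: the caller posts the learner's state and
    gets back a ranked list of skills to work on next."""
    payload = request.get_json()
    learner_id = payload["learner_id"]       # required field
    mastery = payload.get("mastery", {})     # skill name -> estimate in [0, 1]

    # Placeholder policy: surface the least-mastered skills first.
    ranked = sorted(mastery, key=mastery.get)[:3]
    return jsonify({"learner_id": learner_id, "recommended_skills": ranked})

if __name__ == "__main__":
    app.run(port=5000)
```

Writing the contract down this early lets the front-end, engineering, and data science teams build against the same interface before the model behind it is finished.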
The easy answer is we use Python, but that's a vague answer. The better answer is that we use the tools that are going to get the job done. Most recently, on the back end, that's been Python, oftentimes supported by Flask or Django or Falcon, depending on the actual needs. For deep learning, we've mostly used TensorFlow, but there have been cases where we needed a solution that TensorFlow doesn't support but PyTorch does, and so some people learned PyTorch in order for us to use those tools. From an engineering perspective, we've been using more and more Docker in order to actually deploy and support our integration with the other services. A lot of our cloud infrastructure is now based on Google Cloud, so we're leveraging a lot of those tools for data storage, logging, et cetera. In terms of algorithms and models, again, I hire for people who can learn and can solve problems. It's much less important that, say, we are a computer vision team that is really good at CNNs; it's much more important that we are a team that can understand what the needs of the group are and then find the right solution for them. For example, reinforcement learning is a very interesting piece. I just got back from NeurIPS, and if you look at a lot of the reinforcement learning happening there, it's happening on video games or simulations. If we take the basic example of a quadcopter, trying to teach the quadcopter to land in a simulation, you can fail millions of times before you figure out the right way to land it. For us, we were launching a product that had zero data to start with, so we had to build a solution that would use reinforcement learning but had never seen data before. That is a very different problem: in education we didn't have, and would never have, ten million tries to get it wrong before figuring out the right approach, so we tend to lean toward much more interpretable models. That doesn't mean we don't use deep learning, because we do in many cases, but the more we can explain why and how a model is working, the better. And so a lot of the solutions we have certainly have a Bayesian flavor to them: they not only have priors that allow us to smoothly transition from no data to a little data to a lot of data, but in many cases they have a generative capability as well (see the sketch after this answer). So we can not only use the model for predictions but can also understand how it's representing the world and how it is understanding the problem we've given it. A little more concretely, a lot of the innovation that we do is not in the actual algorithms or the more core computational science part of it; a lot of the innovation we bring is finding the right solution and adapting it to the right problem. A lot of times we find that solution in the theory and then write it from scratch in, you know, fairly basic Python and NumPy, for example. Other times we see that the right solution involves something related to object detection; we've done this in the past, where we found TensorFlow's object detection repo, which lays out how to get to the state of the art of the past five years, and we just start walking down that list of models until we reach one that suits our needs.
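To make the prior-to-posterior point concrete, here is a minimal Beta-Bernoulli sketch; this is an illustration under assumed numbers, not the team's production model. With zero observations the estimate falls back on the prior mean, and the data smoothly takes over as responses accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Beta(a, b) prior on the probability that a student answers an item correctly.
# With no data at all, the estimate is just the prior mean a / (a + b).
a, b = 3.0, 3.0   # weakly informative prior centered at 0.5
true_p = 0.8      # unknown ground truth, seen only through simulated responses

for n in (0, 5, 50, 500):
    correct = rng.binomial(n, true_p)                # simulate n graded answers
    post_a, post_b = a + correct, b + (n - correct)  # conjugate Beta update
    print(f"n={n:4d}  posterior mean = {post_a / (post_a + post_b):.3f}")
```

Because the posterior is a full distribution, the same model can also generate simulated responses, which is the generative, inspectable quality described above.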
So, much more than a clear set of tools, we need and we value people who can find the right solutions to the right problems and adapt their toolset to build those solutions. The engineering teams, both front end and back end, have the same philosophy: yes, we might be spending 90% of our time in Python or 90% of our time in Swift, but what's more important is knowing the art of engineering. It's more important to understand what is going on from a higher-level, fairly theoretical perspective, to understand what you're trying to do as an abstracted engineering task, than it is just to get the lines of code right. So it's not just a data science thing.