Alumni Spotlight: Elizabeth Bowling Aldorsson

Catching up with Elizabeth Bowling Aldorsson, Product Strategy & Operations at Anthropic.

As Luminary Labs approaches its 15-year anniversary, our alumni community spans geographies, sectors, and industries. In this alumni spotlight interview — the third in our series — we connect with Elizabeth Bowling Aldorsson, a Luminary currently working at the forefront of the AI revolution.

Elizabeth Bowling Aldorsson, Product Strategy & Operations at Anthropic.

Elizabeth joined Luminary Labs in 2017 and contributed to a number of projects across our focus areas. She supported the Alexa Diabetes Challenge to advance voice-enabled solutions that improve the lives of people with type 2 diabetes, worked with the U.S. Coast Guard to develop new life-saving products as part of the Ready for Rescue Challenge, expanded access to science through the Tool Foundry accelerator, and, as she discusses in our interview, worked with the U.S. Department of Education and other federal agencies to build education ecosystems that strengthen cybersecurity education in high schools. Today, Elizabeth is in a Product Strategy & Operations role at Anthropic, a leading AI safety and research company in San Francisco.

We recently sat down with Elizabeth to hear her reflections on her time at Luminary Labs and learn more about what it’s like working to commercialize the latest AI products. Elizabeth described how her work at Luminary Labs informed her career and how she continues to maintain relationships with colleagues and clients she met during her four-year tenure.

Is there a particularly memorable problem or project that you worked on at Luminary Labs?

I worked on a number of exciting projects and loved many of them. The one that has probably stuck with me the most is CTE CyberNet. This project was meaningful to me for a few reasons. For one, it has had a lot of impact, and I enjoyed seeing how our work translated to real change. Cybersecurity is so important, especially now in the age of AI and in this current climate of conflict and disinformation; with advances in technology, there is more fraud, and security threats are on the rise. It’s an organizational risk and it’s a national security risk. I think the risk we face right now is probably underappreciated.

CTE CyberNet recognized that we cannot improve our national cybersecurity without talent and worked to develop solutions from the ground up, training not just the students but the teachers themselves. It felt like such a precise solution to a really huge problem, one that could have outsized impact. By training teachers to teach cybersecurity, the program made them more invested in the space and helped establish a pipeline into the cybersecurity profession. It felt like a really valuable and impactful approach.

Finally, I’ve really enjoyed keeping up with this project. I’m still close with many of the people I met in the cybersecurity space; it’s an issue that I’ve gotten to stay closely involved in.

What is something you learned at Luminary Labs that you continue to value today?

It’s funny, but the thing I really carry with me is probably the lesson that little things matter; sometimes you do need to sweat the small stuff. At times, our attention to detail was hard and drove me crazy, but I’ve come to see that balance as really critical. As I’ve grown in my career, I’ve realized that seemingly minor aspects of projects, or seemingly small components of bigger programs, can have significant impact. Especially when it comes to customer interactions, design decisions, and program management, Luminary Labs taught me to pay attention to the full scope of everything that needs to be done and to really notice the little things; I think I undervalued that before. I am now able to recognize which details we should focus on, and that experience allows me to prioritize in a way that I probably wouldn’t be able to otherwise.

Tell us about a favorite Luminary Labs memory.

I don’t know if this is a favorite memory or just a wild memory. When our team was flying back from South Dakota where we were attending a cyber camp — a boot camp for CTE CyberNet — our plane had a bird strike. The plane caught fire, tilted to the left, and we had to turn around. It was wild and terrifying. We were so relieved when we finally landed and had this very intense bonding moment in the middle of South Dakota. It was great. So I don’t know if that’s appropriate for this interview, but that is my most special Luminary Labs memory. It was definitely a moment of recognizing, “Hey, we’re all here.”

Can you just share a little bit about your current work?

I work at Anthropic, an AI safety research lab that’s also commercializing large language models (LLMs) and products around them. We have an API platform, where people and enterprises access our LLMs to drive AI-powered solutions, and we also have a suite of products that are more focused on business-to-business and business-to-consumer adoption. My role here is mostly focused on commercialization of LLMs.

We were a research lab first, and as a research lab, our work is highly experimental, with lots of hypotheses and constant change. How do you commercialize things that are probabilistic and experimental? How do you really commercialize research? This is a challenge that research labs in general are facing right now. AI has been in the world of research for so long, and now we’re expected to drive billions of dollars with it. And because the models we are developing are changing so quickly, sometimes we don’t necessarily know how they’re going to behave until weeks — if not days — before we have to bring them to market. So being able to do that — commercialize these LLMs and drive meaningful deployments with customers while at the same time maintaining safety — is the main focus of my work.

What’s something that might surprise people about what you do?

It’s not surprising that AI is moving incredibly fast, but people might not understand that the industry is constantly changing as well. What models are capable of is changing. What models are best at or not best at is changing. The players in the market are changing. The pricing structures are changing. The adoption methods are changing. There’s literally nothing that we can take as ground truth right now. On top of that, we don’t actually know how people are going to be using these models. We don’t yet know all of the positive applications, and we also don’t know how they might be used in dangerous ways. This is a wildly ambiguous industry, a fast-changing company, and a very dynamic role. That is terrifying and exciting.

As a research organization dedicated to AI safety, we’re building products around our research in real time. Any product that we release to our customers meets our rigorous safety standards, so we are confident our products will not cause harm, but it’s impossible for us to anticipate all the novel ways people will use the models. When a new model is released, we’ll go on Twitter and other platforms and ask how people are using our models, and we learn about tasks that we didn’t even imagine. It gives us immense empathy for customers in the market because we are learning about capabilities together in real time. It’s an incredibly rewarding process.

We’d love to hear about an underrated or underhyped issue you care about — what do you wish more people were thinking about?

The concept of interpretability is well known in the AI space, but I don’t think it’s generally understood. The field of interpretability research — interpreting and understanding the neural network of an AI model — was actually established by many of the people who are now at Anthropic. LLMs are very different from traditional software in that they operate more like a black box; they’re more grown than they are built. We give them scaffolding and data, and the model then grows and develops neural networks in unpredictable ways. That’s one of the reasons why their behavior is somewhat hard to predict.

So while we’re able to study and measure what they say, how they react, and what outputs they produce, we really don’t have a full understanding of why they perform that way or which neural networks actually lead to certain responses. This is actually a big safety risk, especially as models get more advanced and powerful. When a model replies to something, there is the possibility that it could be lying or purposely misleading. The model could tell you that you did a good job on something when you actually didn’t, because it wants you to think you did. How do we reduce risk and ensure models do not manipulate users?

Anthropic recently released a paper that described the first time we’ve been able to map the behavior of the neural networks of an LLM. We were able to see where the model lights up when “thinking” about conceptual things like harmony or sophistication. We also saw where it lights up when thinking about a topic like the Golden Gate Bridge, for example. If you’re able to identify these patterns, you’re actually able to adjust how the model behaves: if you strengthen or weaken a specific neural network, you can affect the behavior of the model. Take the Golden Gate Bridge example. If you strengthen that pattern, the model becomes obsessed with the Golden Gate Bridge and keeps bringing it up. You can imagine the other side of that. If you were able to identify a neural network related to bioweapons and you weakened it, the model would be less likely to be manipulated into sharing information about developing bioweapons.

This is a really important field. We are doing so many things related to safety, but this is a really core part of that work. If we can really understand how the models work, rather than just observing their behavior, we can better control them.

I’m wondering if there’s anything you’ve read or seen lately — besides the interpretability report — that you want others to know about?

I recently moved to California and picked up tennis. To support me in my new sport, my husband gave me a book called “The Inner Game of Tennis.” It’s this little book from the 1970s that’s less about tennis and more about overcoming self-doubt, anxiety, and self-judgment, and about figuring out how to do your best in high-stress situations while actually enjoying what you’re doing. It was really nice to read, especially when working in a high-stress environment and in an industry where it’s easy to have imposter syndrome.

What brings you joy?

I love my job. I love this work. I’m so fascinated by it … “intellectually stimulated” is the phrase that comes to mind. Having a job that you love, even when you work incredibly hard, is something I don’t take for granted. That brings me joy.

I live in a little town called Sausalito, which is just north of the Golden Gate Bridge, and I’ve started taking the ferry to work. So every day I get 30 minutes on the water. I walk to the ferry, I take it to the Ferry Building in San Francisco, and then I walk to work. It’s 30 minutes without wifi, sitting on the deck, looking at the bridge. There are seals and waves and sunshine. I call my mom or my best friends, or just look at the water. It’s a moment of peace in this wild and crazy industry and world. My time on the ferry really brings me a lot of peace, joy, and contentment that I feel I hadn’t had before.