How Ex-OpenAI VPs, the Amodei Siblings, Founded the $4B AI Company Anthropic

If you’re someone who keeps up with the latest developments in AI, you have definitely heard of Claude 3, an AI model family developed by Anthropic that surpasses GPT-4 and Gemini Ultra in performance, sparking discussions about AGI (Artificial General Intelligence).

We will discuss Claude 3 in more detail later, but here is an interesting fact: Dario Amodei and Daniela Amodei, the founders of Anthropic, were both Vice Presidents at OpenAI, with Daniela heading the Safety unit and Dario leading research. However, they parted ways with the company three years ago to build their own AI models, keeping safety at the forefront of innovation.

Anthropic is an artificial intelligence research company focused on developing and understanding advanced AI systems, with a special emphasis on safety and interpretability. Its approach centers on creating reliable, interpretable, and steerable AI, aiming to mitigate the risks associated with powerful AI systems.

But let’s start at the beginning of the Amodei siblings’ journey.

Dario Amodei’s Education & Early Days

Looking at Dario’s professional journey, he seems nothing short of a genius. Dario has mentioned in an interview that his journey to founding Anthropic started with a childhood fascination with the objectivity of math, which contrasted with the subjectivity of opinions.

Growing up, he and his younger sister Daniela, the other co-founder, aspired to make a significant impact on the world. Dario’s transition from math enthusiast to physics major, and then to biophysics and computational neuroscience in grad school, was driven by an interest in AI’s potential, sparked by the futurist Ray Kurzweil’s ideas on computational acceleration.

Dario went to Caltech and Stanford for his bachelor’s in physics. Yes, his LinkedIn page mentions a BS in Physics from Caltech, begun in 2003, and another BS in Physics from Stanford.

He went on to earn a PhD in Computational Neuroscience at Princeton University, after which he served as a postdoctoral fellow at the Stanford University School of Medicine.

Dario’s first proper job appears to have been as a consultant at Applied Minds, a technology and design company whose innovative products and services have led to over 1,000 patents.

After his postdoctoral days at Stanford, he worked as a software developer on Skyline, a targeted proteomics environment, and then joined the Chinese internet company Baidu as a research scientist, where he began working on AI.

Source: Vietnam Business

Working at Google and OpenAI

Despite “feeling late to the field” (Dario’s words), he joined the AI research community in 2014 at Google Brain, working as a Senior Research Scientist in deep learning. After that experience in artificial intelligence development, he quit Google and landed on the world-class team at OpenAI, driven by the purpose of doing safe AI research. There, Dario helped build the GPT-2 and GPT-3 LLMs.

His foundational work at OpenAI, including developing GPT models and reinforcement learning from human feedback, underscored the importance of scaling AI systems while ensuring their safety and alignment with human values. 

This dual focus became the core of Anthropic’s mission. Due to disagreements over safety and the company’s direction, Dario left OpenAI in 2020, grounded in the belief that increasing compute improves models, but that alignment and safety are crucial beyond scaling alone.

The decision to leave OpenAI and start Anthropic with a group of colleagues was motivated by a shared belief in prioritizing safe AI development within a dedicated organization.

Anthropic, founded as a for-profit public benefit corporation, initially focused on research, keeping future commercial activities in mind. The founding group's collective vision emphasized AI safety from the start, aiming not just for commercial success but also to set a positive standard in AI.

Source: TechCrunch / Anthropic logo

Today, Dario Amodei is the CEO of Anthropic and an active advocate for responsible innovation that aligns with human principles and values.

Daniela Amodei’s Unconventional Background

Unlike Dario, his younger sister Daniela does not have a background in science and research. Instead, she earned her bachelor’s degree in English Literature, Politics, and Music from the University of California.

Her journey in Silicon Valley began when she joined Stripe, an online finance and payments startup. She worked there for five years in several positions and left as a Risk Manager in 2018.

Source: LinkedIn (Daniela Amodei)

Her background includes work in global health and politics, growth and team management at Stripe (as it grew from 40 to 1,200 people), and team management at OpenAI before co-founding Anthropic with six other co-founders.

She is currently the president of Anthropic, focusing on developing transformative AI systems that are safe for human use.

Founding Anthropic

In the wake of their work on projects like GPT-3 at OpenAI, the co-founders of Anthropic recognized the increasing power of AI models and the accompanying risks. They believed that a focused effort on developing AI in a way that is aligned with human intentions and can be reliably controlled was necessary.

Anthropic was founded in 2021, and its first significant public funding round was its Series A, announced in January 2022, in which it raised $124M. The round was led by Skype co-founder Jaan Tallinn’s investment firm, with participation from James McClave, Oliver Cameron, the Center for Emerging Risk Research, and others.

Funding Raised by Anthropic 

By the end of 2022, Anthropic had secured a total of $700M in capital, with $500M contributed by Alameda Research. This was complemented by a $300 million investment from Google for a 10% stake, contingent on Anthropic’s agreement to procure computing services from Google Cloud.

In May 2023, the company successfully completed a funding round that brought in an additional $450 million, spearheaded by Spark Capital.

Anthropic’s Major Breakthroughs: Claude and Constitutional AI

Anthropic developed its AI chatbot, Claude, offering a messaging interface for users to ask questions and receive in-depth answers, akin to ChatGPT. Initially launched in a closed beta with Slack integration, Claude later became available on claude.ai. 

The name “Claude” possibly honors Claude Shannon, or offers a male counterpart to the female-named AI assistants offered by other companies. The second version, Claude 2, introduced in July 2023 in the US and the UK, emphasized safety, employing “Constitutional AI” for ethical training.

This approach integrates principles from the 1948 Universal Declaration of Human Rights and Apple’s terms of service, advocating for freedom and equality. Claude 2.1 followed in November 2023. 

Research comparing AI models on interpreting financial information has highlighted Claude’s competitive performance. Claude 3 debuted on March 4, 2024, advancing the concept of “Constitutional AI” (CAI) to ensure AI’s alignment with human values, emphasizing helpfulness, harmlessness, and honesty. CAI employs a “constitution” of high-level principles that guides AI behavior, training models to prevent harm and ensure accuracy.
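
To make that concrete, here is a minimal sketch of CAI’s critique-and-revision loop in Python. It is an illustration under stated assumptions, not Anthropic’s actual training code: the `generate` helper is a hypothetical stand-in for any base-model call, and the two principles are made up rather than taken from Claude’s real constitution.

```python
# Minimal sketch of a Constitutional AI critique-and-revision loop.
# `generate` is a hypothetical placeholder for a base language model call;
# the principles below are illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with dangerous or illegal activity.",
]

def generate(prompt: str) -> str:
    """Stand-in for a call to a base language model."""
    raise NotImplementedError("wire this up to a model of your choice")

def constitutional_revision(user_prompt: str) -> str:
    # 1. Draft an initial answer with the base model.
    answer = generate(user_prompt)
    # 2. For each principle, have the model critique its own answer
    #    against that principle, then rewrite the answer accordingly.
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {answer}\n"
            "Explain how the response could better satisfy the principle."
        )
        answer = generate(
            f"Response: {answer}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return answer

# In the published CAI recipe, the revised answers become training data,
# so the final model internalizes the principles instead of needing
# this loop at inference time.
```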

Anthropic’s $4B Valuation

The company has garnered an impressive $2.8 billion in investments, with backing from technology giants like SAP, Zoom, Google, Amazon, and others.

This influx of funding has lifted Anthropic’s valuation to $4.1 billion, underscoring its growing prominence and signaling a bright future ahead.

Claude 3 Taking Over The AI World

Claude 3, as an AI tool, is seriously impressive. It comes in three versions: Haiku, Sonnet, and Opus, with the lightweight Haiku outperforming larger models on coding tasks. The models excel on benchmarks, particularly in “common sense” scenarios, though Claude falls short of Gemini Ultra in math.

It also demonstrates superior coding capabilities, handling complex tasks and maintaining context, especially with obscure libraries and Next.js applications.

Claude 3 can also perform some really cool tasks like fast document analysis, customer service, structuring data from documents, economic analysis, and more.
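
For developers, these capabilities are available through Anthropic’s API as well as the claude.ai interface. Below is a minimal sketch of a document-analysis call using the official Python SDK; the file name and prompt are invented for illustration, and model identifiers may change over time.

```python
# Minimal sketch of a document-analysis call to Claude 3 via Anthropic's
# Python SDK (pip install anthropic). File name and prompt are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("quarterly_report.txt") as f:  # hypothetical input document
    report = f.read()

message = client.messages.create(
    model="claude-3-haiku-20240307",  # Haiku: the fast, lower-cost tier
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Summarize the key financial figures in this report:\n\n{report}",
    }],
)
print(message.content[0].text)
```

Swapping the model string for the Opus tier trades speed and cost for stronger reasoning on harder tasks.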

Anthropic’s AI Philosophy 

It’s important to talk about intentions whenever we discuss an AI company. On the face of it, all AI companies and executives preach conscientious innovation, but you can never be sure of the actual motivations behind it all.

However, Anthropic’s beginnings, at least, do seem to be motivated by curiosity and responsibility. AI is changing the world, and that’s why it’s always important to discuss the ethics involved in it. 

Concerns about AI

Amodei wishes to inform the public about two main concerns regarding AI: misuse by individuals and the autonomous actions of AI systems themselves. He points out the enhanced capability of AI to integrate knowledge, potentially enabling less skilled individuals to undertake harmful actions with profound implications. 

Source: Dario Amodei at TechCrunch Disrupt

The long-term worry is about AI systems acting independently in ways that could be challenging to control or predict.

Institutional Frameworks for Safety and Ethics

Anthropic has adopted several measures to ensure the ethical development of AI. Incorporating as a public benefit corporation allows the company to prioritize ethical considerations alongside profitability. 

Establishing a Long-Term Benefit Trust and a commitment to Constitutional AI ensures that development aligns with explicit ethical principles, responsibly addressing present and future challenges.

By defining AI Safety Levels, Anthropic aligns model development with necessary safety measures, promoting a culture of cautious progression both within the company and across the industry.

Optimistic View of AI

With all the valid concerns around AI, you will notice that many leaders still have an optimistic view of it and see artificial intelligence as a force that humanity should not fight against. 

The vision of an ideal future often includes conquering diseases and eliminating fraud, which are universally desired outcomes. However, the prospect of achieving these goals by 2030 raises questions about the role of superhuman AI. 

Dario Amodei cautions against viewing AI as a singular solution for humanity's challenges, emphasizing the importance of decentralized decision-making and the diverse interpretation of a good life.

Balancing Secrecy and Openness in AI Development

At Anthropic, a selective approach to information sharing is deemed crucial, especially for developments that could be valued in the billions. This philosophy extends throughout the company, emphasizing that knowledge of certain 'secrets' is restricted to a few individuals directly involved, plus key leadership figures like the CEO. This compartmentalization strategy, while limiting the free flow of information, prioritizes protecting sensitive data, allowing for openness wherever possible.

Conclusion

Anthropic’s journey from an initial focus on research to embracing commercial opportunities reflects a balanced vision for responsible AI scaling. Anthropic's success lies in its commitment to safety, ethical considerations, and the belief that technological progress can coexist with human values.

It’s also interesting that Amodei discusses the unique aspects of his role as CEO, splitting his time between standard operational responsibilities and engagements that are unusual for a startup CEO, such as testifying before Congress. 

This blend of activities reflects the unique challenges and responsibilities of leading an AI development company focused on ethical and safe advancement in the field.

One cannot help but appreciate how the Amodei siblings are so vocal about AI and its safety, inspiring all young innovators, entrepreneurs, and tech enthusiasts. 
