Summary
“Is this all hype and no substance?” is a question more people have been asking lately about generative AI. So what does it mean for AI safety if this whole AI thing turns out to be a bit of a bust?
The fundamental case for AI safety doesn’t depend on the AI hype of the last few years. Many of the technologists working on large language models believe that systems powerful enough to turn these safety concerns from theory into real-world problems are right around the corner. They might be right, but they also might be wrong.
I expect that AI will still transform our world — just more slowly. A lot of ill-conceived AI startups will go out of business. People will continue to improve our models at a fairly rapid pace.
If we don’t get superintelligence in the next few years, I expect to hear a lot of “it turns out we didn’t need AI safety.” If you’re an investor in today’s AI startups, it deeply matters whether GPT-5 is going to be delayed six months. But if superintelligent systems are coming eventually, whether in a few years or a few decades, we should think about how we’ll approach them and ensure they’ve been developed safely.