Back in the 1970s, executives could hide behind the rule, “Nobody ever got fired for buying IBM.”
Decades later, the same logic fueled the rise of Salesforce and, later, of the other cloud companies. CIOs embraced SaaS to avoid upfront software costs and infrastructure risk. Each generation of technology offered multiple paths and low-risk choices. Today, because of AI innovation dynamics, building applications on the wrong stack can lead not just to job loss within the software team but to career loss for the decision-maker.
At OpenAI Dev Day, President Greg Brockman made it clear that AI is not only transforming IT jobs but also destroying the notion of safe platform choices. Platform risk and job risk now converge on CTOs making technology choices, and on the developer teams struggling to keep pace with AI developments.
For decades, choosing a dominant platform felt prudent. Interfaces stabilized, skills lasted, and contracts spread risk. That world is disappearing along with the lofty valuations of SaaS favorites like $CRM.
As one meme puts it, “Every time OpenAI has a Dev Day, a thousand startups die.” Greg Brockman confirmed the logic behind that disruption: “We have to choose where to focus. We pick domains where there’s synergy and where we can add real value.”
“OpenAI just killed 1,000,000,000,000,000 startups.” That tweet from the Dev Day floor summed up what many felt as the new Agent Builder demo played. It sounded like the applause of a startup apocalypse.
That synergy filter defines where AI hits first — the domains that are economically valuable, data rich, and computationally feasible. Finance, healthcare, and software development already meet those conditions. Insurance software solutions, with structured data and measurable ROI, will follow.
The executives at risk are not those who took chances but those who relied on “safe” platforms while the ground shifted beneath them.

Judgment and taste: Retain decision-makers who know what good looks like. AI can generate endless options, but only humans can decide what aligns with your brand, compliance framework, and customer trust. As Steve Jobs once said about Microsoft, “They have no taste.” Technology without taste is directionless. Keep this human layer inside; it defines quality and meaning when everything else accelerates.
Embedded collaboration readiness: Be ready to host an embedded software team from your partners, much like Anthropic’s embedded teams that work inside client systems. These teams operate within your environment to evolve processes rather than replace them. Security, access control, and workflows must enable this collaboration. The ability to onboard partner teams quickly will determine how fast you adapt.
Trust and accountability: Assign clear ownership for every AI-assisted process. Someone inside your organization must validate outputs, govern access, and maintain explainability and interpretability. When the tax authorities knock on your door, they will not settle for arresting the AI; more likely, they will obtain from OpenAI the logs of everything related to your tax practices.
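Such ownership implies an auditable record on your own side, not only the vendor's. A minimal sketch of what that could look like (the field names and example values are illustrative assumptions, not from the article):

```python
import time

def log_ai_decision(log, *, process, model_output, validated_by, approved):
    """Append one explainable record per AI-assisted decision."""
    entry = {
        "timestamp": time.time(),
        "process": process,            # e.g. "tax-classification"
        "model_output": model_output,  # what the model produced
        "validated_by": validated_by,  # the accountable human owner
        "approved": approved,          # the owner's verdict
    }
    log.append(entry)
    return entry

audit_log = []
log_ai_decision(audit_log, process="tax-classification",
                model_output="deductible", validated_by="j.doe", approved=True)
```

The point is not the data structure but the discipline: every AI output carries a named human validator, so accountability never dissolves into "the model decided."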
Flexible CI/CD teams: Work with partners who can match your internal DevOps velocity. These teams adjust frameworks, pipelines, and tests as platforms evolve, ensuring continuity while your teams focus on oversight and EU regulatory compliance.
Modular architecture: Use patterns that isolate vendor scope — API gateways, orchestration layers, function calling, retrieval, and evaluation. This structure makes pivots manageable instead of disruptive.
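As an illustrative sketch of that isolation (the interface and provider names below are hypothetical, not from the article), a thin abstraction layer means a platform pivot becomes swapping one adapter rather than rewriting every caller:

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Vendor-neutral interface; callers never import a vendor SDK directly."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In production this would call the vendor SDK; stubbed here.
        return f"[openai] {prompt}"

class LocalProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

PROVIDERS = {"openai": OpenAIProvider, "local": LocalProvider}

def get_provider(name: str) -> ChatProvider:
    """Single gateway point: the only place vendor choice is resolved."""
    return PROVIDERS[name]()

answer = get_provider("local").complete("summarize the claim file")
```

Because vendor choice is resolved in exactly one function, a pivot is a one-line configuration change instead of a disruptive rewrite.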
Insurance and trust expertise: Choose partners who understand underwriting, claims, and compliance. Capabilities such as e-signature, KYC, and trusted archiving are core to digital trust platforms, not add-ons. Some SaaS companies are already on notice from AI, which makes committing to a flexible partnership all the more important.
When it comes to enterprise partnerships and embedded teams, TINQIN operates in this mode, with insurance technology experts who combine DevOps on AWS and Azure, modular architecture, and regulatory precision to deliver adaptive, production-ready systems.
Compute-first mindset: In the past, software development was limited by one factor: labor. The capacity and capability of your teams determined outcomes. The AI race, however, is about compute. OpenAI is diversifying, signing supply and hosting agreements with NVIDIA, AMD, and Oracle, quite separately from Microsoft Azure (legacy) and CoreWeave (training, peaks). Across just these three providers, its commitments exceed a trillion dollars of compute (500, 300, and 300 billion respectively). Compute is no longer a secondary concern for the IT department; it is a strategic dependency that affects every stage of the software development lifecycle, especially deployment speed and model capability.
Cloud-agnostic readiness: Build infrastructure that avoids vendor lock-in while maintaining compliance. For insurers and financial institutions, that often means hybrid or multi-cloud configurations with sovereign EU hosting for sensitive data.
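A hedged sketch of one such configuration rule (the data categories and hosting labels are illustrative assumptions, not tied to any real provider): sensitive records are pinned to sovereign EU hosting while everything else stays portable across clouds:

```python
# Illustrative data-residency rule; categories and hosting labels are
# hypothetical, for demonstration only.
SENSITIVE_CATEGORIES = {"health", "kyc", "claims"}

def hosting_target(data_category: str) -> str:
    """Pin sensitive data to sovereign EU hosting; keep the rest portable."""
    if data_category in SENSITIVE_CATEGORIES:
        return "eu-sovereign"
    return "any-compliant-cloud"

target = hosting_target("kyc")  # → "eu-sovereign"
```

Encoding residency as an explicit rule, rather than as an implicit property of whichever cloud a team happened to deploy to, is what keeps a hybrid setup both compliant and movable.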

Greg Brockman’s 2026 outlook provides a clear lens on what happens next. AI will move first where the economic value is the highest. Disruption begins not with simple tasks but with the most profitable ones.
Software development is already transforming, with finance, healthcare, and insurance to follow. Underwriting, claims, fraud detection, and customer onboarding and service, for example, combine dense data with measurable ROI and are prime targets for AI disruption.
Platform risk and job risk now amplify each other. The cadence of change is set externally, but the opportunity for lasting competitive advantage remains: it lies in how you structure teams, architecture, and partnerships. The companies that stay ahead will not be those predicting the next platform, but those ready to move the moment it arrives.
Excerpt from Matthew Berman’s conversation with Greg Brockman, President of OpenAI, recorded at OpenAI’s San Francisco offices following the company’s annual developer event, OpenAI Dev Day.
Is my job in danger? (29:52)
Matthew: MrBeast said AI is a threat to content creators’ livelihoods, which is my job. What do I have to be worried about?
Greg Brockman: It’s true that AI is going to change a lot of jobs. Some will be totally transformed or disappear, and others we can’t even imagine yet will appear. We’re changing fundamentals of the social contract. I think we’re moving toward a world of abundance, where quality of life is high even if you’re not economically working. If you’re striving and building, there will be much more to gain and to create.
No one knows exactly what lies beyond the AI event horizon, but it’s going to be stranger and probably more delightful than we can imagine.
Matthew: I just started my job, so I’d like to keep it.
Greg: Things built on human connection will be hard for AI to replace. And skilled trades like plumbers and electricians are already in short supply and not easily automated.
Platform risk and building on OpenAI (31:53)
Matthew: We’re here at your developer event, with a room full of developers. You just announced Agent Kit. How should developers building on OpenAI think about platform risk? I’m sure you’ve heard this before, but the meme is, “Every time OpenAI has a Dev Day, a thousand startups die.” I don’t believe that, but how do you draw the line between what you build and what you leave for others?
Greg: We think about that a lot. It’s important to us. We want to help transition the world to an AI-first economy that uplifts everyone, and we can’t do that alone. We rely on developers to connect this technology to the real world.
But we have to choose where to focus. We’re a few thousand people in a huge global economy, so we pick domains where there’s synergy and where we can add real value. Coding is one example; doing it well speeds up our own work too. We aim to amplify as many builders as possible, then go deep in areas that strengthen the ecosystem.
Humans in the loop and AI alignment (34:12)
Matthew: As models improve, humans are still involved at the start (prompting) and end (verifying). How long will that last?
Greg: The purpose of this technology is to benefit humans… and really, all living beings capable of joy. We don’t want a future where humans have to hand-craft prompts or engineer contexts. Those are legacy mechanics. The machine should move closer to the human, understand your goals, and help you achieve them. That’s what uplifting humanity means.
Fully generated software and the future of development (35:40)
Matthew: Will software eventually be fully generated, every pixel, every function, in real time?
Greg: I think so. It’s mind-bending to imagine a fully generative UI. What if there are no buttons? Much of our current design exists because of legacy operating systems. If you reimagine from scratch, with no legacy code, it would look totally different, and probably surprising.
Matthew: In that world, are there still developers?
Greg: Humans will still be involved, just differently. Look at Sora: it’s generative video, but the human presence still matters. When people used image generation earlier this year, the most engaging results had some human grounding: a pet, a family photo. Without that, it was boring. The human connection is what makes it interesting.
I think it’ll be the same for software. People will imagine systems and delegate to AI developers that build them. What will matter most are judgment and taste. Knowing what you want, and having taste, are the real differentiators.