Companies across all business verticals are asking some hard questions about agentic AI, and rightly so. The past week in the Prompt Economy was defined by the answers to those questions, as thought leadership pieces articulated the promise, the anxieties, and, in some cases, the skepticism around agentic AI.
Liquid AI, an MIT spinoff, posed one of the most dramatic questions, asking the VentureBeat readership: “What If We’ve Been Doing Agentic AI All Wrong?” In a whirlwind day for foundation models of all shapes and sizes, Liquid AI announced a new class of models dubbed “Nanos”: small systems, some as small as 350 million parameters, that the company claims can achieve near-GPT-4o quality on narrow tasks.
Nanos are meant to run directly on phones, laptops, and embedded devices with minimal cloud infrastructure, unlike today’s giant frontier models that require expensive cloud capacity. This shift, the company suggests, could cut the cost and energy usage of AI by several orders of magnitude, letting agentic AI applications across consumer, enterprise, and public-sector domains scale from millions of devices to billions.

The family is currently available in a limited capacity, with six task-specific models designed for high-demand, high-impact use cases: English-Japanese translation, structured data extraction, mathematical reasoning, retrieval-augmented generation, and tool calling. Initial benchmarks indicate the models significantly outperform considerably larger open-source systems, and may come close to frontier-level performance.
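To make the deployment shift concrete, here is a minimal sketch of what running one of these small task-specific models locally might look like, assuming a chat-style checkpoint published on Hugging Face and served with the transformers library; the model ID below is a hypothetical placeholder, not a confirmed Nanos checkpoint name.

```python
# A minimal sketch of the on-device pattern Liquid AI describes: load a
# small task-specific model locally and generate with no cloud round-trip.
# The model ID is a hypothetical placeholder, not a confirmed Nanos name.
from transformers import pipeline

translator = pipeline(
    "text-generation",
    model="LiquidAI/LFM2-350M-ENJP-MT",  # hypothetical placeholder ID
    device_map="auto",                   # runs on the local CPU/GPU
)

out = translator(
    [{"role": "user", "content": "Translate to Japanese: The meeting is at 9 a.m."}],
    max_new_tokens=64,
)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```

The same pattern would apply to the extraction, reasoning, and tool-calling variants; only the checkpoint and prompt change.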
For businesses, that could translate into faster, more affordable AI deployments in fields such as finance, e-commerce, healthcare, and automotive, where speed-to-market, cost-effectiveness, and data privacy often matter as much as raw performance.
As Liquid AI chief executive Ramin Hasani put it, the breakthrough isn’t in size but in architecture: “Nanos flip the deployment model. We ship intelligence to the device rather than shipping every token to a data center. That opens up speed, privacy, resilience, and a cost profile that actually is scalable to everyone.”
On a more technical note, Microsoft is spelling out what it will take for companies to adopt its tech stack. In its latest blog post, part of its Azure AI Foundry series, Microsoft laid out a design framework for building enterprise-ready AI agents capable of operating securely and seamlessly across organizations, detailing what the company describes as the “agentic web stack.”
The post characterizes this as an inflection point similar to the standardization of the internet, arguing that agents require common protocols, discovery systems, trust frameworks, orchestration tools, and governance to scale responsibly.
The stack’s eight major elements include the Model Context Protocol (MCP) and Agent-to-Agent (A2A) standards for communication, registries of active agents, and memory services that let agents learn from their interactions over time.
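Microsoft’s post stays at the architecture level, but the MCP side of that communication layer already has a concrete open-source implementation. As a rough illustration, here is a minimal tool server built with the MCP Python SDK’s FastMCP helper; the order-status tool is a hypothetical stub, not part of Microsoft’s stack.

```python
# A minimal MCP tool server using the open-source MCP Python SDK.
# The order_status tool is a hypothetical stub for illustration; a real
# agent would back it with a live system of record.
from mcp.server.fastmcp import FastMCP

server = FastMCP("order-agent")

@server.tool()
def order_status(order_id: str) -> str:
    """Look up the status of an order (stubbed for illustration)."""
    fake_db = {"A-1001": "shipped", "A-1002": "processing"}
    return f"Order {order_id}: {fake_db.get(order_id, 'not found')}"

if __name__ == "__main__":
    # Serves the tool over stdio, so any MCP-capable agent can discover
    # and call it through the standard protocol handshake.
    server.run()
```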
The post also argues that the stack’s value is practical rather than theoretical. Use cases Microsoft points to include end-to-end automation of business processes, supply chain synchronization across corporate boundaries, knowledge-worker augmentation, and memory-driven customer journeys.
In every instance, the promise is the same: faster cycle times, less manual intervention, and greater trust through observability and governance. By embedding identity management, compliance, and telemetry into the agentic stack, Microsoft wants Azure AI Foundry to become the enterprise platform for scaling agents without the risk of rolling them out as opaque, ad hoc deployments.
Just as HTTP and TCP/IP standardized the internet, this stack provides the common services and protocols necessary for multi-agent ecosystems to be secure, scalable, and interoperable across organizational boundaries, write authors Yina Arenas and Ulrich Homann.
Last week, Amazon also released a white paper on its identity-focused tech stack: Amazon and SailPoint have launched Harbor Pilot, billed as the first identity security platform to bring agentic AI to enterprise compliance and access management, powered by Amazon Bedrock.
Harbor Pilot combines a fleet of AI-driven intelligent agents to handle document queries, dynamic real-time workflow building, and on-demand resolution of identity issues.
Administrators can now define complex workflows in minutes and get immediate answers to compliance questions instead of waiting on manual process cycles or lengthy support turnarounds. The solution is deployed on AWS architecture with integrated services including Elastic Kubernetes Service, OpenSearch, and CloudWatch, built on a security-first design to deliver scalable, reliable governance for enterprises.
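Harbor Pilot’s internals aren’t public, but the Bedrock pattern it builds on is well documented. As a rough sketch, assuming the Bedrock Converse API and an illustrative model choice, an agent answering a natural-language compliance request might wrap a call like this:

```python
# A rough sketch of the Amazon Bedrock pattern underlying agents like
# Harbor Pilot: send a natural-language request to a hosted model via the
# Converse API. The model ID and prompt are illustrative only; Harbor
# Pilot's actual prompts and model choices aren't public.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative choice
    messages=[{
        "role": "user",
        "content": [{"text": "Draft a workflow that revokes access for "
                             "contractors inactive for 90 days."}],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```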
Since its launch in March 2025, uptake has been rapid, with an average of half of customers onboarded within 30 days. Early users reported workflow creation times falling from days to minutes, and ticket resolution moving from multi-week cycles to instant query responses.
Enterprises view Harbor Pilot as a force multiplier: it automates mundane identity tasks while supporting the controls that compliance requires. Upcoming features include natural-language access requests, personalized analytics, and session history, extending its role as a context-aware assistant integrated into SailPoint Identity Security Cloud.
In the words of one customer testimonial, “The workflow builds have been really impressive. It could assemble an elaborate workflow in a few minutes, which had previously taken hours to do manually.”