Should We Let Admins Write Code With AI?
The Case for Guardrails Over Gatekeeping
Agentforce Vibes launched at TDX a few weeks ago. I used it while I was there to build a Lightning Web Component and deploy it to a scratch org. It's a genuine game-changer.
I came back to the UK and mentioned this, fairly casually, to a Salesforce developer I work with. His reaction stayed with me. He wasn't dismissive, and he wasn't talking about me specifically. He was uneasy about what it pointed to. "Admins are going to do this and ship it," he said. "They don't really understand what they're doing. They'll break production. This isn't what admin means."
I've heard versions of that same conversation for years now, ever since AI coding tools became a practical reality. It plays out in every Salesforce team I work with, usually over Slack or Teams, sometimes a little tensely. Admins and BAs have been discovering, first with GitHub Copilot, then ChatGPT and Claude Code, and now Agentforce Vibes, that these tools can generate working Apex classes, triggers, and LWCs. Developers have been, quietly or not so quietly, pushing back.
It's worth taking that pushback seriously rather than dismissing it. Because both sides have a point, and the honest answer isn't "AI democratises development, get with the programme" any more than it's "keep admins out of the code". The real question is whether the controls that already exist in mature Salesforce DevOps pipelines (branch protections, automated scans, test coverage gates, peer review) are robust enough to extend the circle of who can write code. My own answer, after years of moving between programme management, admin and consulting work, is that it depends on the team. Some of the review processes I've seen are genuinely rigorous. Others are box-ticking exercises: a quick skim of a diff, a thumbs up, a test suite padded out to hit the 75% coverage gate. The framing matters, and so does what we actually mean when we say "access to code".
What's Actually Changed
The admin-versus-developer divide in the Salesforce ecosystem has always been artificial. The platform was built on a declarative-first philosophy that deliberately blurs the line. Admins have been writing formula fields with conditional logic, building Flows with loops and recursion, and configuring Einstein bots with branching conversation trees for years. That's all programming in everything but name.
What's new is that AI has collapsed the translation barrier between natural language and Apex. An admin can now describe a requirement, "when an Opportunity closes, check for open cases on the Account and reassign them to the CSM team", and get working, bulkified, governor-limit-aware Apex back in seconds. One published comparison reports that Claude Code generates Apex that compiles on the first attempt roughly 85% of the time, compared to around 60% for ChatGPT and 70% for GitHub Copilot on Salesforce-specific code (Clientell, 2026).
That's a real capability shift, and it's why the question has become urgent. The people writing about this from the admin side are enthusiastic: the narrative is framed as "Admin+", admins augmented by AI rather than replaced, with the tooling acting as a skill amplifier that lets them tackle work previously out of reach (CloudAnswers, 2026). The developer community's pushback is real too, and it's not just turf protection.
The Developer Concerns Aren't Wrong
Before making the case for extending access, it's worth airing the legitimate technical objections. They fall into four buckets.
Context blindness. AI coding tools don't reliably understand the org they're generating code for. As Salesforce Ben has documented, generative AI can suggest Apex methods and classes that don't exist, try to query objects and fields that haven't been created, and invoke JavaScript functions that were never written (Valencia, 2024). A developer spots this immediately because they know the codebase. An admin acting on trust might not.
The test problem. It's tempting to let AI write your test classes, but it's a risky move. The same LLM that produced wrong output can produce wrong tests, and while AI can help initialise test data and suggest scenarios, a human developer must ultimately verify that the tests are valid (Valencia, 2024). An AI that writes the code and also writes the tests that pass it is, as the Gearset team put it, AI marking its own homework (Gearset, 2026). That's a real problem regardless of who's at the keyboard.
Happy-path coding. AI tends to produce code that works for the scenario described and falls over on edge cases. As one Medium analysis of Salesforce vibe coding noted, a prompt may not capture all edge cases, and generated code may deliver a "happy-path" but break under production scenarios (Dhari, 2025). Bulk loads, mixed-DML exceptions, recursion, race conditions, governor limits at scale: these are the things that bite in production, and they're the things a seasoned developer has been burned by enough times to ask about.
Refactoring fragility. This one's particularly acute for LWCs. Research by the CodeScene team, cited in a Salesforce Ben article, found that JavaScript is particularly challenging to refactor due to its nuanced syntax, with two common patterns where AI makes the code incorrect: inverting boolean logic, so that a && b becomes !(a && b), or moving code containing this into a separate function, causing this to lose its intended context (Valencia, 2024). An admin refactoring an LWC with AI assistance may ship a subtle behavioural change without ever realising.
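Both failure modes are easy to reproduce outside an LWC. A minimal JavaScript sketch, with all names invented for illustration:

```javascript
// 1. Inverted boolean logic: the original guard...
const canEscalate = (isOpen, hasOwner) => isOpen && hasOwner;
// ...and an "equivalent" AI refactor that has actually negated it:
const canEscalateRefactored = (isOpen, hasOwner) => !(isOpen && hasOwner);
console.log(canEscalate(true, true));           // true
console.log(canEscalateRefactored(true, true)); // false: behaviour silently flipped

// 2. Code containing `this` moved into a standalone function.
const component = {
  recordId: '006xx0000012345',
  label() { return `Opportunity ${this.recordId}`; },
};
// The AI extracts the method into a free function reference;
// `this` no longer points at the component when it runs.
const extracted = component.label;
let brokenResult;
try { brokenResult = extracted(); } catch (e) { brokenResult = 'threw'; }
console.log(component.label()); // "Opportunity 006xx0000012345"
console.log(brokenResult);      // wrong label, or a TypeError in strict mode
```

In strict-mode modules the extracted call throws outright; in sloppy mode it quietly returns the wrong label, which is arguably worse, because nothing fails until a user notices.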
None of these are reasons to lock admins out. But they are reasons why "admin + AI + Deploy to Production" without anything in between is a bad idea.
The Bigger Risk Is Gatekeeping
Here's where the other side of the argument gets interesting. The alternative to giving admins AI-assisted code access isn't a world where all code is written by disciplined, senior developers. It's a world where two things happen.
First, the work doesn't stop being needed. As Vofox's analysis of citizen developer governance puts it, citizen developers often deliver faster than IT can, not necessarily better, not more secure, not more maintainable, but faster. A sales ops person who understands the actual workflow can build a functional tool in a week; IT building the same thing properly takes three months. The fast version ships, people use it, and it becomes critical before IT even finishes requirements gathering (Vofox, 2026). Blocking admin access to AI-assisted code doesn't eliminate demand. It pushes the work into Flows that really should have been Apex, into unsupported third-party apps, or into personal copies of ChatGPT producing code that gets pasted into the developer console with no governance at all. That's worse, not better.
Second, it entrenches a bottleneck that's been a drag on Salesforce programmes for years. The developer resource is always the scarcest on a team. I've watched perfectly good business requirements sit in a backlog for two release cycles because the one Apex developer is busy untangling a trigger framework. Meanwhile an admin who can describe the requirement precisely, test it in a sandbox, and shepherd it through QA is standing right there. The argument that they shouldn't be able to use AI to generate a starting point for that work is, on reflection, hard to sustain.
The citizen development literature is blunt about the choice. Banning citizen development means accepting slower delivery; enabling it means accepting some risk. There's no option that's both perfectly safe and perfectly fast (Vofox, 2026). And as FlowDevs argues in its analysis of low-code governance, the solution isn't to lock down the platform and ban citizen development; doing so creates bottlenecks and stifles innovation. Instead, effective governance balances security with agility, creating a safe lane where users can build fast without crashing the car (FlowDevs, 2026).
What Good Guardrails Actually Look Like
This is where the article has to get specific, because "use guardrails" is the kind of thing everyone nods at and nobody implements. Here's what a credible admin-enabled, AI-assisted code pipeline looks like in practice.
Source control is non-negotiable. Everything lives in Git. Every change is a pull request. No admin, no developer, no one is deploying AI-generated code straight to production via a changeset. This is the single biggest guardrail and it happens to be the one that the Salesforce ecosystem has historically been worst at. If your org doesn't have this yet, the admin-AI question is premature. Solve Git first.
Automated static analysis on every pull request. PMD for Apex is free and a reasonable floor. More sophisticated tooling goes further: Gearset Code Reviews, for example, scans over 300 metadata types across Salesforce environments, Apex classes, triggers, test classes, Flow functionality, Visualforce, Lightning Components, permission sets, profiles, and field-level security, analysing both source code and configuration to identify issues across the entire project (Gearset, 2026). The point of this isn't to replace human review. It's to catch the things humans reliably miss and to do so before the reviewer's time is spent.
Deterministic quality gates, not AI-reviewing-AI. This is subtle but important. If your code was generated with AI assistance and then reviewed with an AI-based reviewer, you've created a loop with no external check. As DevOps Launchpad points out, non-deterministic AI-based review tools don't provide consistent results every time, so you can't guarantee that security and compliance standards are being followed. Without careful prompt engineering and validation frameworks, custom LLM solutions will miss some security issues or generate overconfidence in problematic code (DevOps Launchpad, 2026). Deterministic scanners, the ones that give you the same answer every time given the same input, are what you want at the quality gate.
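To make the distinction concrete, here's a toy deterministic check in JavaScript. Real gates use proper parsers (PMD, for instance, analyses the Apex AST rather than raw text); this regex sketch exists only to demonstrate the property that matters at a quality gate: the same input always produces the same findings.

```javascript
// Toy deterministic scanner: flags SOQL queries that appear inside a loop.
// Naive line-based heuristic for illustration only — not a real lint rule.
function findSoqlInLoops(apexSource) {
  const findings = [];
  let loopDepth = 0;
  apexSource.split('\n').forEach((line, i) => {
    if (/\b(for|while)\s*\(/.test(line)) loopDepth++;
    if (loopDepth > 0 && /\[\s*SELECT\b/i.test(line)) {
      findings.push({ line: i + 1, rule: 'soql-in-loop' });
    }
    // crude brace tracking — good enough for the sketch
    loopDepth = Math.max(0, loopDepth - (line.split('}').length - 1));
  });
  return findings;
}

const snippet = `
for (Account a : accounts) {
    List<Contact> cs = [SELECT Id FROM Contact WHERE AccountId = :a.Id];
}`;

// Deterministic: two runs over the same input agree exactly, every time.
const first = findSoqlInLoops(snippet);
const second = findSoqlInLoops(snippet);
console.log(JSON.stringify(first) === JSON.stringify(second)); // true
console.log(first.length); // 1
```

An LLM reviewer asked to review the same snippet twice can return two different verdicts; a rule like this cannot, which is why it belongs at the gate.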
Mandatory peer review by someone who can read the code. This is the guardrail that most protects against admin-AI risk. The admin can draft the change with AI. The admin can run the tests. But a developer, or another admin with appropriate technical judgement, has to approve the PR before it merges. This is not gatekeeping. This is the same standard developers hold each other to, and extending it to AI-assisted admin work is the obvious move.
Test coverage that means something. The 75% Apex coverage requirement is a floor, not a target, and it's easily gamed by AI-generated tests that assert nothing meaningful. A decent review process checks that tests actually exercise edge cases, negative paths, and bulk scenarios, not just that the coverage number is green.
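The coverage-gaming problem is language-agnostic, so a JavaScript sketch serves to show it. The helper and its bug are invented for illustration:

```javascript
// This reassignment helper has a classic bulk bug: it only updates the
// first record in the list. (Illustrative code, not from any real org.)
function reassignCases(cases, newOwnerId) {
  if (cases.length > 0) cases[0].ownerId = newOwnerId; // bug: ignores the rest
  return cases;
}

// The "padded" test: executes the code, asserts nothing meaningful,
// turns the coverage number green, catches no bugs.
const padded = reassignCases([{ ownerId: 'old' }], 'csm');
console.log(padded.length > 0); // true, and tells us nothing

// A real test: exercises the bulk path, which is where the bug lives.
const bulk = reassignCases([{ ownerId: 'a' }, { ownerId: 'b' }], 'csm');
const allReassigned = bulk.every(c => c.ownerId === 'csm');
console.log(allReassigned); // false: the failure the padded test never saw
```

Both tests give the function 100% line coverage. Only one of them would stop this change reaching production.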
Tiered approval based on risk. Not every change needs the same rigour. Vofox proposes a sensible tiered model: a development sandbox where anyone can build anything, department-level use with light automated review, enterprise deployment with formal IT and security validation, and production with external access requiring the full governance process. This lets people move fast on low-risk items while maintaining control over high-risk deployments (Vofox, 2026). A utility class that formats phone numbers doesn't need the same scrutiny as a trigger that touches billing data.
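In a pipeline, a tiered model can be as simple as mapping changed file paths to a risk tier and letting the riskiest file set the bar for the whole pull request. A hypothetical sketch, with the paths, patterns and thresholds all invented for illustration:

```javascript
// Ordered rules: first match wins. Patterns and policies are assumptions.
const RULES = [
  { pattern: /trigger|billing/i, tier: 'high',   approvals: 2, securityReview: true  },
  { pattern: /classes\//i,       tier: 'medium', approvals: 1, securityReview: false },
  { pattern: /.*/,               tier: 'low',    approvals: 1, securityReview: false },
];

function riskTier(changedPaths) {
  // The whole PR inherits the risk of its riskiest file.
  const order = { high: 3, medium: 2, low: 1 };
  return changedPaths
    .map(p => RULES.find(r => r.pattern.test(p)))
    .reduce((a, b) => (order[a.tier] >= order[b.tier] ? a : b));
}

const pr = riskTier([
  'force-app/main/default/classes/PhoneFormatter.cls',
  'force-app/main/default/triggers/BillingTrigger.trigger',
]);
console.log(pr.tier);           // "high": the billing trigger sets the bar
console.log(pr.securityReview); // true
```

The phone-number formatter on its own would sail through as low risk; touch billing in the same PR and the whole change gets the full treatment.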
Operational tooling that AI can't replace. Even the best AI coding agent has limits. Clientell's comparison testing notes that Claude Code can write an Apex batch job, but it cannot execute it, monitor its progress, handle errors in real time, or roll back if something goes wrong; data operations at scale require tooling that manages the full lifecycle (Clientell, 2026). Knowing where the AI stops and the disciplined deployment tooling starts is a skill admins need to develop as they take on more code-adjacent work.
Is This Developers Protecting Their Skill Set?
Let's address the question directly, because it's the one the heading above is really asking.
Some of it, yes. Not maliciously; most developers I've worked with are generous with their knowledge. But it's human nature to feel uneasy when the things that took you years to learn become accessible to someone who arrived last week with a good prompt. That's a real feeling, and it deserves acknowledgement rather than dismissal.
But dig into the specific objections and most of them aren't really about skill-set protection. They're about accountability. A developer who ships broken Apex owns it. They wrote it, they tested it (or didn't), they know what went wrong. When AI writes code and an admin ships it, the accountability chain is murkier. Who reviewed the edge cases? Who understood the governor limit implications? Who's on the hook when the trigger recurses in a bulk load?
The guardrails answer isn't "trust the admin" or "trust the AI". It's "build a process where neither has to be fully trusted, because the process catches mistakes before production does". That's actually the standard we should be holding developer-written code to as well, and in mature shops, we already do.
Where the developer concern is most legitimate is around the erosion of deep platform understanding. It's entirely possible for admin+AI workflows to produce working code without the person shipping it understanding why it works. Over time, that builds a team with a thin bench of people who can debug something gone wrong. This is a real cost, and it argues for pairing AI-assisted admin coding with deliberate learning (Trailhead trails, code review walkthroughs, pairing sessions) rather than treating it as a pure productivity win.
The Honest Conclusion
Where the guardrails are real rather than ceremonial, extending code-adjacent work to admins, augmented by AI, is a net benefit. It expands delivery capacity, reduces the developer bottleneck, and, crucially, doesn't reduce safety if the DevOps foundations are solid. The risks developers raise are real, but they're risks of bad process, not risks of admins-with-AI specifically. A senior developer shipping AI-generated code to production via a changeset with no review is doing something more dangerous than an admin raising a well-scoped PR through a mature pipeline.
The honest version of the argument looks like this. If your Salesforce org has Git-backed source control, automated scanning, mandated peer review, and tiered approval paths, then giving admins AI-assisted code access is a benefit and the developer concerns are largely addressed by the process. If your org doesn't have those things, the question isn't whether admins should be writing code with AI. It's whether anyone on your team should be, because the guardrails that protect you from AI's failure modes are the same ones that protect you from human ones.
That reframing, from "who gets to write code" to "what does our pipeline catch", is the shift the whole ecosystem needs to make. And it's one admins, developers, and architects all have a stake in getting right.
References
Clientell. (2026, April 1). Claude Code for Salesforce admins: What it can and cannot do. https://www.getclientell.com/salesforce-blogs/claude-code-salesforce-admin
CloudAnswers. (2026, February 12). Agentforce Vibes: Can we really use AI to write code as Salesforce admins? https://cloudanswers.com/blog/agentforce-vibes-can-we-really-use-ai-to-write-code-as-salesforce-admins
DevOps Launchpad. (2026, January 26). The best Salesforce code review tools in 2026. https://devopslaunchpad.com/blog/best-salesforce-code-review-tools/
Dhari, G. C. (2025, October 22). Salesforce vibe coding. Medium. https://medium.com/@gadige.sfdc/salesforce-vibe-coding-f88d525a56d5
FlowDevs. (2026, January 3). Sleeping soundly: Why citizen development needs guardrails. https://www.flowdevs.io/blog/post/sleeping-soundly-why-citizen-development-needs-guardrails
Gearset. (2026). Salesforce code review automation solution. https://gearset.com/solutions/code-reviews/
Valencia, E. (2024, November 13). Can AI refactor your Salesforce code successfully? Salesforce Ben. https://www.salesforceben.com/can-ai-refactor-your-salesforce-code-successfully/
Vofox. (2026). Governance for citizen developers: Avoiding shadow IT with proper guardrails. https://vofox.com/governance-for-citizen-developers/
Why I’ve Changed How I Design Salesforce Builds
I spent last week at TrailblazerDX in San Francisco, and by the time Salesforce wrapped the Headless 360 keynote it was hard to miss the direction of travel. The platform most of us have designed for (Lightning, page layouts, the browser-based world) is being quietly rearchitected to be one surface among many. Headless 360 is the architectural umbrella: the whole platform accessible via APIs, MCP tools and CLI commands (no browser required), with a new experience layer for rendering agent interactions wherever users actually work (Salesforce Ben, TDX 2026).
None of this came out of nowhere. A fortnight earlier, Parker Harris, the co-founder who built the Lightning UI, had stood on a stage in San Francisco and asked out loud why anyone should still log into Salesforce (SalesforceDevops.net, 31 March 2026). Salesforce had just unveiled more than thirty new AI capabilities for Slackbot and was openly positioning Slack as the place where enterprise work actually happens (SiliconANGLE, 31 March 2026). TDX gave that direction an architectural backbone.
I've been running Salesforce programmes long enough to tell the difference between a real product shift and a new coat of paint. This is a real shift. And it's going to change how I approach every new build.
Where I've landed
I don't think the Lightning desktop UI is going away. Anyone who tells you otherwise is either selling something or hasn't watched real users get real work done in it. But I do think it's quietly moving from the interface to one of several, and for a growing number of users, it's no longer the primary one.
A field sales rep closing out their week isn't opening a laptop. A service user logging an IT request isn't filing a ticket in a portal. Salesforce itself has called time on portal-driven ticketing and is pushing customers toward conversational, agent-led experiences across Slack, Teams, email, web and voice (Salesforce Newsroom, 26 February 2026). An admin setting up OAuth scopes isn't clicking through five screens any more. They're having a conversation with Agentforce for Setup.
The canvas has moved. My approach has to move with it.
The evidence isn't subtle any more
A year ago I could have written this as speculation. I can't now.
The piece of Headless 360 that's caught my attention most is what Salesforce calls the Agentforce Experience Layer, the rendering component of the new architecture. It lets you define an interaction once and have it render across Slack, mobile, Teams, ChatGPT, Claude, Gemini, anything that speaks MCP. The pitch from Salesforce is literally build once, render everywhere (Salesforce Ben, TDX 2026). Slackbot has become an MCP client in its own right, coordinating across thousands of apps in the Slack Marketplace and on AppExchange, and from this summer Slack will be provisioned automatically with every new Salesforce customer (The Next Web, April 2026).
The number that's stuck with me: one enterprise customer whose Agentforce adoption jumped from 22% to 78% in six weeks. The agent itself didn't change. Only the surface it was offered on (SiliconANGLE, 15 April 2026). That's the clearest sign I've seen that the interface question has become properly separable from the capability question.
None of this means Lightning is done. It just means Lightning has company.
What's changed in how I design
The old default on a Salesforce programme went roughly: persona → process → object model → page layouts → automation → reports. Page design sat in the middle and pulled everything into its orbit. You could point at a screen and say "this is what the user does." When half the user base doesn't look at a screen any more, that method falls over.
Here's what I'm doing differently.
I start with intents, not pages. The first artefact on my current programme isn't a screen inventory. It's a list of user jobs. Confirm the competitor on a deal. Work out why an account's slipping. Log an expense from the road. Only once the intent is clear do I ask: on which surface is this best served? Some want a desktop page. Some want a Slack command. Some want a conversation. Some want all three, and I design them to behave consistently.
I make actions surface-agnostic. An "update opportunity stage with rationale" action has to be the same action whether it's invoked from a Lightning button, a Slack message or an Agentforce topic. Reusability across surfaces isn't a nice-to-have any more. It's the starting point.
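The shape of that pattern, reduced to a JavaScript sketch: one core action with the validation inside it, and thin adapters per surface. The names and message format here are invented; in practice the core would typically be an invocable Apex action sitting behind each surface.

```javascript
// Core action: one place for the rule "a stage change needs a rationale".
function updateStage(opportunity, stage, rationale) {
  if (!rationale || rationale.trim() === '') {
    return { ok: false, error: 'Rationale is required' };
  }
  return { ok: true, record: { ...opportunity, stage, rationale } };
}

// Lightning adapter: structured input straight from a form.
const fromLightning = (opp, form) => updateStage(opp, form.stage, form.rationale);

// Slack/agent adapter: parses "stage: X because Y" out of free text first.
const fromMessage = (opp, text) => {
  const m = /stage:\s*(\w[\w ]*?)\s+because\s+(.+)/i.exec(text);
  return m ? updateStage(opp, m[1], m[2]) : { ok: false, error: 'Could not parse' };
};

const opp = { id: '006xx', stage: 'Qualify' };
const a = fromLightning(opp, { stage: 'Negotiate', rationale: 'Pricing agreed' });
const b = fromMessage(opp, 'stage: Negotiate because pricing agreed');
console.log(a.ok, b.ok);                          // true true
console.log(a.record.stage === b.record.stage);   // true: same action, two surfaces
```

The point of the design is that the rationale rule lives once, in the core, so no surface can quietly skip it.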
I take Data 360 seriously, and earlier. Salesforce has rebranded Data Cloud as Data 360 and put it squarely at the centre as the unified data layer that gives every agent context (Salesforce Investor Relations, Dreamforce 2025). Independent analysis is consistent on this: Data 360 is what retrieval-augmented agents actually ground their answers in (360 Degree Cloud, December 2025). A half-populated Account record on a Lightning page is ugly. The same record feeding an agent produces a confidently wrong answer. Data quality has moved from a hygiene concern to an architectural one.
I treat prompt templates, topics and actions as first-class artefacts. Same rigour I apply to flows and page layouts: versioned, reviewed, tested, governed. The temptation to treat them as lightweight config is strong. They're closer to code.
I've changed how I think about testing. You can't regression-test an agent the way you can a page. Salesforce has been honest about this: agents are probabilistic, not deterministic, and can land on unexpected outcomes that are behaviour to observe rather than bugs to fix (Salesforce Ben, TDX 2026). Testing Center, Observability and Session Tracing are the new instruments. Most teams I'm seeing, partners included, are still catching up to what probabilistic QA actually means for their delivery lifecycle.
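What probabilistic QA looks like in miniature: instead of asserting one exact output, you run the non-deterministic behaviour many times and gate on a pass rate. A sketch with a mocked agent standing in for the real thing; everything here is invented for illustration, and a seeded generator keeps the sketch itself reproducible.

```javascript
// Stand-in for a non-deterministic agent call: 90% of runs produce an
// acceptable answer, 10% an off-policy one. (Invented for illustration.)
function mockAgent(question, rng) {
  return rng() < 0.9 ? 'Escalate to the CSM team' : 'No action needed';
}

// Behavioural gate: score many trials against an acceptance check.
function passRate(trials, rng) {
  let passes = 0;
  for (let i = 0; i < trials; i++) {
    const answer = mockAgent('Open cases on a closed-won account?', rng);
    if (/escalate/i.test(answer)) passes++;
  }
  return passes / trials;
}

// Seeded Lehmer-style PRNG so the evaluation run is repeatable.
function lcg(seed) {
  let s = seed;
  return () => (s = (s * 48271) % 2147483647) / 2147483647;
}

const rate = passRate(500, lcg(42));
console.log(rate.toFixed(2)); // typically near 0.90 for this mocked agent
// The gate is a band, not an exact transcript match:
console.log(rate > 0.8); // true
```

A deterministic flow either passes or fails; an agent passes at a rate, and your test plan has to say what rate is acceptable and what happens when it drifts.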
What hasn't changed, and now matters more
One thing I'm seeing in the market is an assumption that the platform fundamentals matter less now. If anything, they matter more.
The object model still has to be right. An agent will expose a broken data model faster and more publicly than a Lightning page ever did. Sharing and security still have to be right, and it's harder now because architects are reasoning about agent identity as well as user identity. When an agent acts on a user's behalf, whose permissions apply? What can it see that the user can't? These are live questions, and they don't have clean answers yet.
Integration patterns, governance and change management are all more load-bearing, not less. The inversion is that the work we used to spend most of our time on (screens) is getting distributed across surfaces, while the work that was often underinvested in (data model integrity, data quality, security, governance) has quietly become the foundation everything else rests on.
Where I'd push back on the hype
Let me be honest about where I think the current narrative overshoots.
Not every interaction belongs in an agent. Agents are bad at some things: deep multi-step workflows with strong audit requirements, mass data operations, pixel-precise reporting, anything where a user needs to see and manipulate a lot of structured information at once. Lightning exists for good reasons and a lot of them are still valid.
It's worth listening to the sceptical voices too. The Register has pointed out that the vision of Slackbot as a single interface across every enterprise application rather glosses over how complex those systems actually are, and how hard it is to keep the underlying data current and well-governed (The Register, 2 April 2026). Salesforce reported combined Agentforce and Data 360 ARR of around $1.4 billion in Q3 FY26. Meaningful, but still a fraction of the overall business, and the bar for what counts as a successful deployment varies wildly (Futurum Group, December 2025).
The failure mode I'm watching for isn't "we didn't do enough with agents." It's "we turned every interaction into a conversation and alienated the users who just wanted a grid."
How we should be changing our approach
For anyone scoping or delivering a Salesforce programme right now, the practical implication of all this is that the surface question belongs in scope from the first release, not deferred to a later phase once "the core is live." That shift has a few concrete knock-ons I think we should be planning for from the first workshop:
Document user journeys across surfaces, not as single-screen flows. The same intent will often surface in Lightning, in Slack, and in an Agentforce conversation, and our designs need to anticipate that.
Design core capabilities to be invocable from any surface (Lightning, Salesforce Mobile, Slack, Agentforce) with the same action, data model and security underneath. Reusability across surfaces becomes the starting point, not a later optimisation.
Treat Data 360 scope and data quality as go-live blockers rather than enhancements. If agents are anywhere on the roadmap, the data foundation has to come first.
Extend governance to cover topics, agents and prompt templates alongside profiles, permission sets and sharing rules. The governance model most programmes run today simply doesn't cover these new artefacts.
Build evaluation and behavioural testing into test plans for anything an agent will touch, not just the deterministic flows. Probabilistic QA is a new discipline for most of us.
None of this is radical in isolation. What's different from even a year ago is that all of it needs to be day-one design, not something we sort out after go-live.
Where this leaves us
The line that's stayed with me through all of it is still Harris's. The person publicly questioning the Lightning UI was the person who built it. That kind of candour is unusual in this industry, and at TDX it became clear there's an architecture being built behind the rhetoric.
We're not heading into a world without the Salesforce UI. We're heading into a world where the UI is one of several interfaces to the same platform, and where the central design question is no longer "what goes on the page" but "what intent are we serving, and on which surface?" The platform disciplines get more important. The screen disciplines get more distributed.
For those of us designing and delivering these programmes, the work isn't getting easier. It's getting more architectural, and a lot more interesting. The teams that'll do well over the next two or three years are the ones who stop treating the desktop UI as the default and start designing for a multi-surface world from the first workshop.
The Lightning page isn't dead. It's just no longer where my design conversation starts.
References
Parker Harris Just Told You to Stop Logging into Salesforce. SalesforceDevops.net, 31 March 2026. https://salesforcedevops.net/index.php/2026/03/31/parker-harris-stop-logging-into-salesforce-slackbot-march-2026/
Salesforce transforms Slackbot into the ultimate work assistant with 30 new AI features. SiliconANGLE, 31 March 2026. https://siliconangle.com/2026/03/31/salesforce-transforms-slackbot-ultimate-work-assistant-30-new-ai-features/
Salesforce Headless 360 and Agentforce Vibes 2.0 Revealed at TDX 2026. Salesforce Ben, April 2026. https://www.salesforceben.com/salesforce-headless-360-and-agentforce-vibes-2-0-revealed-at-tdx-2026/
Salesforce Targets ITSM: 180 Organizations Adopt Agentforce IT Service. Salesforce Newsroom, 26 February 2026. https://www.salesforce.com/news/press-releases/2026/02/26/agentforce-it-service-selected-for-itsm/
Slack's biggest AI update turns Slackbot into a desktop agent. The Next Web, April 2026. https://thenextweb.com/news/slack-slackbot-30-ai-features-agentic
Salesforce bets on conversation as the new interface for developers. SiliconANGLE, 15 April 2026. https://siliconangle.com/2026/04/15/salesforce-bets-conversation-new-interface-developers/
Welcome to the Agentic Enterprise: With Agentforce 360. Salesforce Investor Relations, Dreamforce 2025. https://investor.salesforce.com/news/news-details/2025/Welcome-to-the-Agentic-Enterprise-With-Agentforce-360-Salesforce-Elevates-Human-Potential-in-the-Age-of-AI/default.aspx
How Salesforce Data 360 Fuels Context-Aware AI Agents. 360 Degree Cloud, December 2025. https://360degreecloud.com/blog/how-salesforce-data-360-fuels-context-aware-ai-agents/
Salesforce looks to Slackbot to help solve SaaSpocalypse. The Register, 2 April 2026. https://www.theregister.com/2026/04/02/salesforce_slack_update/
Salesforce Q3 FY 2026: AI Agents, Data 360 Lift Bookings. Futurum Group, December 2025. https://futurumgroup.com/insights/salesforce-q3-fy-2026-ai-agents-data-360-lift-bookings-and-fy26-outlook/

