Emily Todd

Should We Let Admins Write Code With AI?

The Case for Guardrails Over Gatekeeping

Agentforce Vibes launched at TDX a few weeks ago. I used it while I was there to build a Lightning Web Component and deploy it to a scratch org. It's a genuine game-changer.

I came back to the UK and mentioned this, fairly casually, to a Salesforce developer I work with. His reaction stayed with me. He wasn't dismissive, and he wasn't talking about me specifically. He was uneasy about what it pointed to. "Admins are going to do this and ship it," he said. "They don't really understand what they're doing. They'll break production. This isn't what admin means."

I've heard versions of that same conversation for years now, ever since AI coding tools became a practical reality. It plays out in every Salesforce team I work with, usually over Slack or Teams, sometimes a little tensely. Admins and BAs have been discovering, first with GitHub Copilot, then ChatGPT and Claude Code, and now Agentforce Vibes, that these tools can generate working Apex classes, triggers, and LWCs. Developers have been, quietly or not so quietly, pushing back.

It's worth taking that pushback seriously rather than dismissing it. Because both sides have a point, and the honest answer isn't "AI democratises development, get with the programme" any more than it's "keep admins out of the code". The real question is whether the controls that already exist in mature Salesforce DevOps pipelines (branch protections, automated scans, test coverage gates, peer review) are robust enough to extend the circle of who can write code. My own answer, after years of moving between programme management, admin and consulting work, is that it depends on the team. Some of the review processes I've seen are genuinely rigorous. Others are box-ticking exercises: a quick skim of a diff, a thumbs up, a test suite padded out to hit the 75% coverage gate. The framing matters, and so does what we actually mean when we say "access to code".

What's Actually Changed

The admin-versus-developer divide in the Salesforce ecosystem has always been artificial. The platform was built on a declarative-first philosophy that deliberately blurs the line. Admins have been writing formula fields with conditional logic, building Flows with loops and recursion, and configuring Einstein bots with branching conversation trees for years. That's all programming in everything but name.

What's new is that AI has collapsed the translation barrier between natural language and Apex. An admin can now describe a requirement, "when an Opportunity closes, check for open cases on the Account and reassign them to the CSM team", and get working, bulkified, governor-limit-aware Apex back in seconds. One published comparison reports that Claude Code generates Apex that compiles on the first attempt roughly 85% of the time, compared to around 60% for ChatGPT and 70% for GitHub Copilot on Salesforce-specific code (Clientell, 2026).

That's a real capability shift, and it's why the question has become urgent. The people writing about this from the admin side are enthusiastic: the narrative is framed as "Admin+", admins augmented by AI rather than replaced, with the tooling acting as a skill amplifier that lets them tackle work previously out of reach (CloudAnswers, 2026). The developer community's pushback is real too, and it's not just turf protection.

The Developer Concerns Aren't Wrong

Before making the case for extending access, it's worth airing the legitimate technical objections. They fall into four buckets.

Context blindness. AI coding tools don't reliably understand the org they're generating code for. As Salesforce Ben has documented, generative AI can suggest Apex methods and classes that don't exist, try to query objects and fields that haven't been created, and invoke JavaScript functions that were never written (Valencia, 2024). A developer spots this immediately because they know the codebase. An admin acting on trust might not.

The test problem. It's tempting to let AI write your test classes, but this is a risky approach. The same LLM that produced wrong output can produce wrong tests, and while AI can help initialise test data and suggest scenarios, a human developer ultimately must verify the test is valid (Valencia, 2024). An AI that writes code and also writes the tests meant to validate that code is, as the Gearset team put it, AI marking its own homework (Gearset, 2026). That's a real problem regardless of who's at the keyboard.

Happy-path coding. AI tends to produce code that works for the scenario described and falls over on edge cases. As one Medium analysis of Salesforce vibe coding noted, a prompt may not capture all edge cases, and generated code may deliver a "happy-path" but break under production scenarios (Dhari, 2025). Bulk loads, mixed-DML exceptions, recursion, race conditions, governor limits at scale: these are the things that bite in production, and they're the things a seasoned developer has been burned by enough times to ask about.
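The gap between "works for the prompt" and "works in production" is easy to show outside of Apex too. Here's a minimal JavaScript sketch; the helper names and shapes are hypothetical, invented for illustration, not taken from any of the cited sources.

```javascript
// "Happy path" version, in the style AI often produces: it works for the
// scenario described in the prompt (a populated list of open cases) and
// assumes the input always arrives in that shape.
function reassignHappyPath(cases, csmTeamId) {
  return cases.map((c) => ({ ...c, ownerId: csmTeamId }));
}

// Production-minded version: the edge cases a seasoned reviewer asks about.
function reassign(cases, csmTeamId) {
  if (!csmTeamId) throw new Error("csmTeamId is required");
  if (!Array.isArray(cases)) return []; // bulk-safe: missing or empty input
  return cases
    .filter((c) => c && c.status !== "Closed") // skip nulls and closed cases
    .map((c) => ({ ...c, ownerId: csmTeamId }));
}
```

Call `reassignHappyPath` with an upstream query that returned nothing and it throws a TypeError; the guarded version returns an empty list. Neither difference shows up in a demo against well-formed sample data, which is exactly why it survives a casual review.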

Refactoring fragility. This one's particularly acute for LWCs. Research by the CodeScene team, cited in a Salesforce Ben article, found that JavaScript is particularly challenging to refactor because of its nuanced syntax, and identified common patterns where AI makes the code incorrect: inverting boolean logic so that a && b becomes !(a && b), or moving code containing this into a separate function, causing this to lose its intended context (Valencia, 2024). An admin refactoring an LWC with AI assistance may ship a subtle behavioural change without ever realising.
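Both failure patterns fit in a few lines of plain JavaScript. This is a hypothetical component, not code from the CodeScene research, but it reproduces the two breakages described.

```javascript
const component = {
  rows: [1, 2, 3],
  // As a method, `this` is the component when called as component.rowCount().
  rowCount() {
    return this.rows.length;
  },
};

// AI-style extraction: the method body moved into a standalone function.
// Called on its own, `this` no longer refers to the component, so
// this.rows is undefined and the call throws a TypeError.
function rowCountExtracted() {
  return this.rows.length;
}

// Boolean inversion: a && b and !(a && b) are not equivalent.
const a = true, b = false;
console.log(a && b);    // false
console.log(!(a && b)); // true
```

The inversion is the nastier one in practice: the code still runs, the tests that only cover one branch still pass, and the behavioural flip surfaces as a bug report weeks later.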

None of these are reasons to lock admins out. But they are reasons why "admin + AI + Deploy to Production" without anything in between is a bad idea.

The Bigger Risk Is Gatekeeping

Here's where the other side of the argument gets interesting. The alternative to giving admins AI-assisted code access isn't a world where all code is written by disciplined, senior developers. It's a world where two things happen.

First, the work doesn't stop being needed. As Vofox's analysis of citizen developer governance puts it, citizen developers often deliver faster than IT can, not necessarily better, not more secure, not more maintainable, but faster. A sales ops person who understands the actual workflow can build a functional tool in a week; IT building the same thing properly takes three months. The fast version ships, people use it, and it becomes critical before IT even finishes requirements gathering (Vofox, 2026). Blocking admin access to AI-assisted code doesn't eliminate demand. It pushes the work into Flows that really should have been Apex, into unsupported third-party apps, or into personal copies of ChatGPT producing code that gets pasted into the developer console with no governance at all. That's worse, not better.

Second, it entrenches a bottleneck that's been a drag on Salesforce programmes for years. The developer resource is always the scarcest on a team. I've watched perfectly good business requirements sit in a backlog for two release cycles because the one Apex developer is busy untangling a trigger framework. Meanwhile an admin who can describe the requirement precisely, test it in a sandbox, and shepherd it through QA is standing right there. The argument that they shouldn't be able to use AI to generate a starting point for that work is, on reflection, hard to sustain.

The citizen development literature is blunt about the choice. Banning citizen development means accepting slower delivery; enabling it means accepting some risk. There's no option that's both perfectly safe and perfectly fast (Vofox, 2026). And as FlowDevs argues in its analysis of low-code governance, the solution isn't to lock down the platform and ban citizen development; doing so creates bottlenecks and stifles innovation. Instead, effective governance balances security with agility, creating a safe lane where users can build fast without crashing the car (FlowDevs, 2026).

What Good Guardrails Actually Look Like

This is where the article has to get specific, because "use guardrails" is the kind of thing everyone nods at and nobody implements. Here's what a credible admin-enabled, AI-assisted code pipeline looks like in practice.

Source control is non-negotiable. Everything lives in Git. Every change is a pull request. No admin, no developer, no one is deploying AI-generated code straight to production via a changeset. This is the single biggest guardrail and it happens to be the one that the Salesforce ecosystem has historically been worst at. If your org doesn't have this yet, the admin-AI question is premature. Solve Git first.

Automated static analysis on every pull request. PMD for Apex is free and a reasonable floor. More sophisticated tooling goes further: Gearset Code Reviews, for example, scans over 300 metadata types across Salesforce environments, Apex classes, triggers, test classes, Flow functionality, Visualforce, Lightning Components, permission sets, profiles, and field-level security, analysing both source code and configuration to identify issues across the entire project (Gearset, 2026). The point of this isn't to replace human review. It's to catch the things humans reliably miss and to do so before the reviewer's time is spent.

Deterministic quality gates, not AI-reviewing-AI. This is subtle but important. If your code was generated with AI assistance and then reviewed with an AI-based reviewer, you've created a loop with no external check. As DevOps Launchpad points out, non-deterministic AI-based review tools don't provide consistent results every time, so you can't guarantee that security and compliance standards are being followed. Without careful prompt engineering and validation frameworks, custom LLM solutions will miss some security issues or generate overconfidence in problematic code (DevOps Launchpad, 2026). Deterministic scanners, the ones that give you the same answer every time given the same input, are what you want at the quality gate.

Mandatory peer review by someone who can read the code. This is the guardrail that most protects against admin-AI risk. The admin can draft the change with AI. The admin can run the tests. But a developer, or another admin with appropriate technical judgement, has to approve the PR before it merges. This is not gatekeeping. This is the same standard developers hold each other to, and extending it to AI-assisted admin work is the obvious move.

Test coverage that means something. The 75% Apex coverage requirement is a floor, not a target, and it's easily gamed by AI-generated tests that assert nothing meaningful. A decent review process checks that tests actually exercise edge cases, negative paths, and bulk scenarios, not just that the coverage number is green.
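What "gamed coverage" looks like is worth spelling out. A sketch in JavaScript (the phone-formatting utility and both test functions are hypothetical, chosen to mirror the utility-class example below):

```javascript
// Hypothetical utility under test.
function formatPhone(digits) {
  if (typeof digits !== "string" || !/^\d{10}$/.test(digits)) return null;
  return `(${digits.slice(0, 3)}) ${digits.slice(3, 6)}-${digits.slice(6)}`;
}

// Coverage-padding test: every line of formatPhone executes, the coverage
// gate goes green, and nothing is actually verified.
function paddedTest() {
  formatPhone("4155551234");
  formatPhone("");
}

// Meaningful test: asserts the happy path, the negative paths, and a
// bulk scenario.
function meaningfulTest() {
  console.assert(formatPhone("4155551234") === "(415) 555-1234");
  console.assert(formatPhone("123") === null);  // too short
  console.assert(formatPhone(null) === null);   // wrong type
  const bulk = Array.from({ length: 200 }, () => formatPhone("4155551234"));
  console.assert(bulk.every((v) => v === "(415) 555-1234"));
}
```

Both tests produce identical coverage numbers. Only one of them would catch a regression, and that's the difference a reviewer has to check for by reading the assertions, not the dashboard.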

Tiered approval based on risk. Not every change needs the same rigour. Vofox proposes a sensible tiered model: a development sandbox where anyone can build anything, department-level use with light automated review, enterprise deployment with formal IT and security validation, and production with external access requiring the full governance process. This lets people move fast on low-risk items while maintaining control over high-risk deployments (Vofox, 2026). A utility class that formats phone numbers doesn't need the same scrutiny as a trigger that touches billing data.

Operational tooling that AI can't replace. Even the best AI coding agent has limits. Clientell's comparison testing notes that Claude Code can write an Apex batch job, but it cannot execute it, monitor its progress, handle errors in real time, or roll back if something goes wrong; data operations at scale require tooling that manages the full lifecycle (Clientell, 2026). Knowing where the AI stops and the disciplined deployment tooling starts is a skill admins need to develop as they take on more code-adjacent work.

Is This Developers Protecting Their Skill Set?

Let's address the question directly, because it's the one that the article title is really asking.

Some of it, yes. Not maliciously, most developers I've worked with are generous with their knowledge, but it's human nature to feel uneasy when the things that took you years to learn become accessible to someone who arrived last week with a good prompt. That's a real feeling and it deserves acknowledgement rather than dismissal.

But dig into the specific objections and most of them aren't really about skill-set protection. They're about accountability. A developer who ships broken Apex owns it. They wrote it, they tested it (or didn't), they know what went wrong. When AI writes code and an admin ships it, the accountability chain is murkier. Who reviewed the edge cases? Who understood the governor limit implications? Who's on the hook when the trigger recurses in a bulk load?

The guardrails answer isn't "trust the admin" or "trust the AI". It's "build a process where neither has to be fully trusted, because the process catches mistakes before production does". That's actually the standard we should be holding developer-written code to as well, and in mature shops, we already do.

Where the developer concern is most legitimate is around the erosion of deep platform understanding. It's entirely possible for admin+AI workflows to produce working code without the person shipping it understanding why it works. Over time, that builds a team with a thin bench of people who can debug something gone wrong. This is a real cost, and it argues for pairing AI-assisted admin coding with deliberate learning (Trailhead trails, code review walkthroughs, pairing sessions) rather than treating it as a pure productivity win.

The Honest Conclusion

Where the guardrails are real rather than ceremonial, extending code-adjacent work to admins, augmented by AI, is a net benefit. It expands delivery capacity, reduces the developer bottleneck, and, crucially, doesn't reduce safety if the DevOps foundations are solid. The risks developers raise are real, but they're risks of bad process, not risks of admins-with-AI specifically. A senior developer shipping AI-generated code to production via a changeset with no review is doing something more dangerous than an admin raising a well-scoped PR through a mature pipeline.

The honest version of the argument looks like this. If your Salesforce org has Git-backed source control, automated scanning, mandated peer review, and tiered approval paths, then giving admins AI-assisted code access is a benefit and the developer concerns are largely addressed by the process. If your org doesn't have those things, the question isn't whether admins should be writing code with AI. It's whether anyone on your team should be, because the guardrails that protect you from AI's failure modes are the same ones that protect you from human ones.

That reframing, from "who gets to write code" to "what does our pipeline catch", is the shift the whole ecosystem needs to make. And it's one admins, developers, and architects all have a stake in getting right.

References

Clientell. (2026, April 1). Claude Code for Salesforce admins: What it can and cannot do. https://www.getclientell.com/salesforce-blogs/claude-code-salesforce-admin

CloudAnswers. (2026, February 12). Agentforce Vibes: Can we really use AI to write code as Salesforce admins? https://cloudanswers.com/blog/agentforce-vibes-can-we-really-use-ai-to-write-code-as-salesforce-admins

DevOps Launchpad. (2026, January 26). The best Salesforce code review tools in 2026. https://devopslaunchpad.com/blog/best-salesforce-code-review-tools/

Dhari, G. C. (2025, October 22). Salesforce vibe coding. Medium. https://medium.com/@gadige.sfdc/salesforce-vibe-coding-f88d525a56d5

FlowDevs. (2026, January 3). Sleeping soundly: Why citizen development needs guardrails. https://www.flowdevs.io/blog/post/sleeping-soundly-why-citizen-development-needs-guardrails

Gearset. (2026). Salesforce code review automation solution. https://gearset.com/solutions/code-reviews/

Valencia, E. (2024, November 13). Can AI refactor your Salesforce code successfully? Salesforce Ben. https://www.salesforceben.com/can-ai-refactor-your-salesforce-code-successfully/

Vofox. (2026). Governance for citizen developers: Avoiding shadow IT with proper guardrails. https://vofox.com/governance-for-citizen-developers/
