We bound IBM Verify's Rich Authorization Requests to HashiCorp Vault dynamic credentials. We built a Vault plugin for it. The database role that ran your transaction did not exist before you tapped Approve. It will not exist after.
You ask the banking agent to transfer one thousand dollars from checking to savings. Your phone buzzes. You glance at the push notification. Tap Approve. Two seconds later the money has moved.
What you never see is what happened in the database.
About a hundred milliseconds before the SQL ran, a Postgres user with a name like v-banking-transfers-0fe14f3c14c98efb did not exist. It was created. It was granted INSERT, UPDATE on accounts and transactions, and only those two tables. Three SQL statements ran. Two updates and one insert. Then ownership was reassigned, the role was dropped, and the user was gone. Total live time on the order of a hundred milliseconds, with a five minute TTL ceiling above that as the safety net.
That ephemeral user is what you get when authorization stops being a thing the application carries around and starts being a thing the credential is. The credential matched the request, not the role. The transfer agent did not get a Postgres user that could drop tables. It did not get a Postgres user that could read customer records. It got a user that could move two amounts and write one audit row. For five minutes, max. And then the user vanished from the database, the way a transient identity should.
The Question
What if every time an agent calls a backend, the credential it uses is bound to the specific transaction the user just approved on their phone? Not the role. Not the scope. The transaction.
This post is the answer to that question. With running code.
HashiCorp publishes a validated pattern for AI agent identity with Vault. It is good. The pattern keys Vault dynamic credentials on RBAC group claims in the agent's JWT. If the JWT carries a group named accounts_writer, the agent can read database/creds/accounts-writer. If it carries reports_reader, it can read database/creds/reports-reader.
This works exactly as designed for human-shaped workloads. A workload has a job. The job has a role. The role has groups. The groups map to dynamic credentials with the GRANTs that role needs. Static identity, dynamic creds. The TTL on the credential keeps the blast radius small even when the workload is compromised.
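In Vault terms, the validated pattern reduces to a policy gate on a dynamic-credential path. A rough sketch (path and policy names are illustrative, not taken from the HashiCorp pattern verbatim): a JWT auth role maps the group claim to a policy, and the policy is the only thing standing between the workload and the credential.

```hcl
# Policy attached to tokens whose JWT carries the accounts_writer group claim.
# Any caller holding this policy can mint the same credential, for any request.
path "database/creds/accounts-writer" {
  capabilities = ["read"]
}
```

The key point for what follows: the gate is keyed on who the caller is, and nothing in it varies with what the caller is about to do.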
An agent does not have a role in that sense. An agent has a session that may handle hundreds of requests, and the requests do not all need the same authority. A user might ask the agent to look up an account balance, then transfer money, then file a tax form, all in the same chat. Each one of those is a different transaction with different stakes. Mapping them all to the same RBAC-keyed credential gives the agent every entitlement of every group the user belongs to, on every call, regardless of what the user actually asked the agent to do.
The pattern is right about the engine. Vault dynamic credentials are the right abstraction. The pattern is also right about the TTL. Five minutes is plenty for a transaction and far too short for a stolen credential to do harm. What the pattern is missing is the answer to the question "what changed about the request that should change the credential."
The IETF published RFC 9396, Rich Authorization Requests, in 2024. RAR is a standard way to attach a JSON object to an OAuth2 token request that describes the operation, the resource, the affected party, and any other context the policy engine cares about. It looks like this when an agent asks for permission to transfer money.
RFC 9396 · authorization_details

```json
{
  "type": "urn:smt:agent:banking",
  "operationDetails": { "action": "transfer_funds" },
  "instructedAmount": { "currency": "USD", "amount": 1000 }
}
```
IBM Verify supports RAR natively at /oauth2/token. The agent declares authorization_details as part of an RFC 8693 token-exchange request. Verify's policy engine reads the JSON and decides whether to mint the OBO token, and whether to require an MFA challenge first. The push notification body can render fields out of the RAR so the user sees the actual operation. Not "an application would like to access your data." Not "an agent would like to perform an action." The transfer amount, the source, the destination, all in plain language on the screen of the phone in your hand.
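On the wire, that token-exchange request looks roughly like the sketch below. The endpoint path comes from the post; the host, token values, and the exact parameter encoding are illustrative. Per RFC 9396, authorization_details travels as a URL-encoded JSON array alongside the RFC 8693 parameters (shown unencoded here for readability).

```http
POST /oauth2/token HTTP/1.1
Host: tenant.verify.ibm.com
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:token-exchange
&subject_token=eyJ...                       (the user's token)
&subject_token_type=urn:ietf:params:oauth:token-type:access_token
&authorization_details=[{"type":"urn:smt:agent:banking",
    "operationDetails":{"action":"transfer_funds"},
    "instructedAmount":{"currency":"USD","amount":1000}}]
```

Verify evaluates the array, pushes the MFA challenge if policy demands one, and only then mints the OBO token with the same authorization_details embedded in it.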
The piece that was missing in the RAR story was on the credential side. Verify can describe the operation. Verify can decide on the operation. The push body can prove the operation. What was nowhere in the system was a credential issuer that read the RAR and produced a credential matched to it. So we wrote one.
It is a HashiCorp Vault secrets engine, written in Go, called vault-plugin-secrets-verify-rar. It plugs into a Vault server the same way Vault's own database engine does. You enable it at a mount path. You configure it with a Postgres connection (support for other databases is planned). You configure it with one or more roles. Each role declares a set of rar_mappings that bind a RAR shape to a set of Postgres GRANTs.
vault · banking-transfers role

```json
{
  "db_name": "agentic-db",
  "max_ttl": "10m",
  "rar_mappings": {
    "urn:smt:agent:banking|transfer_funds": {
      "grants": ["GRANT banking_transfers TO \"{{name}}\""],
      "ttl": "5m"
    }
  }
}
```
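Wiring that role into a running Vault server follows the standard external-plugin flow. The commands below are a sketch: the plugin and mount names come from this post, but the config path (verify-rar/config/agentic-db), connection-string shape, and role-file convention are assumptions about the plugin's API, not documented facts.

```shell
# Register the plugin binary, then mount it (standard Vault plugin flow).
vault plugin register -sha256="$PLUGIN_SHA" secret vault-plugin-secrets-verify-rar
vault secrets enable -path=verify-rar vault-plugin-secrets-verify-rar

# Point the mount at Postgres and load the role shown above.
# Both paths below are illustrative; check the plugin's integration guide.
vault write verify-rar/config/agentic-db \
    connection_url="postgresql://vault_rar_admin:...@db:5432/bank"
vault write verify-rar/roles/banking-transfers @banking-transfers-role.json
```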
When a workload calls verify-rar/creds/banking-transfers with a JWT, the plugin extracts authorization_details[0] and jti from the claims. It builds a mapping key by joining type and operationDetails.action with a pipe. It looks up the role's rar_mappings. On a match the plugin opens a privileged Postgres connection, runs CREATE ROLE, runs the role's grants, sets a VALID UNTIL clock, and returns username plus password to the workload. On a miss it returns 403 with a structured audit event that names the type and action the caller tried to use. The lease metadata carries the jti and a SHA-256 fingerprint of the full authorization_details, so the credential issuance can be joined to every other event in the system.
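The post does not show the plugin's internals, but the lookup key and fingerprint it describes are mechanical enough to sketch. A minimal version, with function names of my own choosing (mappingKey, fingerprint are illustrative, not the plugin's actual identifiers):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// mappingKey joins the RAR type and action with a pipe, producing the
// key used to look up the role's rar_mappings,
// e.g. "urn:smt:agent:banking|transfer_funds".
func mappingKey(rarType, action string) string {
	return rarType + "|" + action
}

// fingerprint hashes the JSON encoding of authorization_details with
// SHA-256. Go's json.Marshal sorts map keys, so the same details map
// always yields the same fingerprint for the lease metadata.
func fingerprint(authorizationDetails any) (string, error) {
	b, err := json.Marshal(authorizationDetails)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	rar := map[string]any{
		"type":             "urn:smt:agent:banking",
		"operationDetails": map[string]any{"action": "transfer_funds"},
	}
	key := mappingKey("urn:smt:agent:banking", "transfer_funds")
	fp, err := fingerprint(rar)
	if err != nil {
		panic(err)
	}
	fmt.Println(key)
	fmt.Println(fp[:12]) // truncated for display
}
```

On a miss, the real plugin returns 403 with an audit event naming the attempted type and action; the sketch above only covers the happy-path key derivation.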
What the Plugin Is, and Is Not
It is a credential issuer. It reads authorization_details, matches them against a configured mapping, mints a Postgres user with a tightly scoped GRANT, and emits one JSON audit line per decision. It registers RevokeEphemeralUser as the lease callback so Vault automatically drops the role on TTL or revoke.
It is not a policy engine. It does not authenticate the JWT. Vault's JWT auth method already did that. It does not decide whether the request is allowed. IBM Verify already did that, with a policy that reads the same RAR. The plugin issues the credential that proves both upstream decisions were made and that the credential cannot do anything except what those decisions allowed.
The repository is at github.ibm.com/rgraham/verify-rar-vault-plugin (it may not be public yet), licensed MPL-2.0. The integration guide alongside the code is the day-of-implementation checklist. The README is the design rationale.
Here is the full chain, captured from the live demo on April 26, 2026. The user is signed into the banking agent. They type "transfer 1000 from checking to savings." The agent goes to work.
The agent presents the OBO token to verify-rar/creds/banking-transfers. The plugin reads authorization_details, sees type=urn:smt:agent:banking and action=transfer_funds, looks up the role's mapping, and issues this audit event before it returns the credential.
verify-rar:cred_issued

```json
{
  "type": "verify-rar:cred_issued",
  "role": "banking-transfers",
  "jti": "9c82655b-4cb6-4a76-936f-669a2cd7342c",
  "rar_type": "urn:smt:agent:banking",
  "rar_action": "transfer_funds",
  "fingerprint": "a1b2c3...",
  "username": "v-banking-transfers-0fe14f3c14c98efb",
  "leaseId": "verify-rar/creds/banking-transfers/jequik1k8w4...",
  "leaseDurationSec": 300
}
```
That same jti appears on the IBM Verify SSO event log, on Vault's own audit log, on the plugin's audit emission, and on the audit row Postgres writes when the agent runs INSERT INTO transactions. One join key flows from the push notification on your phone to the GRANT statement on the database. Forensics on a $1,000 transfer becomes a WHERE jti = '...' away.
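Concretely, the pivot is a single query against the permanent audit row. The audit column names come from this post; the table name and the other columns (amount, created_at) are assumptions about the demo schema, shown for illustration only.

```sql
-- One join key, one WHERE clause, the whole forensic story.
SELECT audit_pg_user,   -- the ephemeral Postgres user that ran the SQL
       audit_grant_id,  -- the Vault lease that minted it
       amount,          -- illustrative column, not confirmed by the post
       created_at       -- illustrative column, not confirmed by the post
FROM   transactions
WHERE  audit_jti = '9c82655b-4cb6-4a76-936f-669a2cd7342c';
```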
Set log_statement = 'mod' on the Postgres instance, then watch the log during a single transfer. This is what the database sees, in order; each step notes which Postgres user issued the statement.
1. The plugin's privileged user (vault_rar_admin) runs CREATE ROLE "v-banking-transfers-..." WITH LOGIN PASSWORD '...' VALID UNTIL '...'. The VALID UNTIL timestamp is five minutes in the future.
2. vault_rar_admin runs GRANT banking_transfers TO "v-banking-transfers-...". The pre-baked role banking_transfers already holds the only privileges the agent needs: INSERT, UPDATE on two tables. Nothing else.
3. The agent, now logged in as the ephemeral user, debits the source account.
4. The agent credits the destination account.
5. The agent writes the audit row. It carries audit_pg_user, audit_jti, and audit_grant_id: the join keys that point at the credential just issued and the Verify event that gated it.
6. vault_rar_admin reassigns any objects the ephemeral user happened to own back to itself. There usually are none, but the call is unconditional, because Postgres refuses to drop a role that owns objects.
7. vault_rar_admin drops the ephemeral user. The credential is now physically incapable of logging in again, even if it leaked. The lease is closed early; the TTL is a backstop, not the primary control.
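Stitched together, the sequence above corresponds to roughly this SQL, with passwords and timestamps elided. The role lifecycle statements follow the post; the accounts and transactions column names inside the transfer itself are assumptions about the demo schema.

```sql
-- vault_rar_admin: mint the ephemeral user, valid for five minutes
CREATE ROLE "v-banking-transfers-0fe14f3c14c98efb"
  WITH LOGIN PASSWORD '...' VALID UNTIL '...';
GRANT banking_transfers TO "v-banking-transfers-0fe14f3c14c98efb";

-- ephemeral user: the approved transaction, and nothing else
-- (balance/id column names are illustrative)
UPDATE accounts SET balance = balance - 1000.00 WHERE id = 'checking';
UPDATE accounts SET balance = balance + 1000.00 WHERE id = 'savings';
INSERT INTO transactions (amount, audit_pg_user, audit_jti, audit_grant_id)
VALUES (1000.00,
        'v-banking-transfers-0fe14f3c14c98efb',
        '9c82655b-4cb6-4a76-936f-669a2cd7342c',
        'verify-rar/creds/banking-transfers/jequik1k8w4...');

-- vault_rar_admin: tear down; Postgres refuses to drop a role that owns objects
REASSIGN OWNED BY "v-banking-transfers-0fe14f3c14c98efb" TO vault_rar_admin;
DROP ROLE "v-banking-transfers-0fe14f3c14c98efb";
```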
The whole sequence runs in about 108 milliseconds in normal operation. The only slow link in the chain is the user thumbing Approve on their phone. Everything else, SPIFFE attestation, Vault auth, plugin lookup, Postgres role lifecycle, runs at infrastructure speed.
The Forensic Story
The audit row in transactions is permanent. It records the ephemeral user that ran the SQL. It records the Vault grant identifier. It records the jti. Everything you need to reconstruct the transaction is on disk forever. The user that ran the SQL no longer exists. The receipt remains.
Here is the headline I want a security architect to take away from this post. HashiCorp's pattern uses Vault dynamic credentials keyed on RBAC. Our plugin uses Vault dynamic credentials keyed on RAR. Same engine. Narrower key. The TTL discipline is the same. The lease lifecycle is the same. The audit story is similar. The credential issuance changes from "what role does this caller have" to "what is this caller about to do."
Which Pattern, When
If your workload has a stable role and a stable job, the HashiCorp validated pattern is fine. RBAC group claim, dynamic credential, TTL discipline. It works; it just grants broader, more static authority than any single request needs.
If your workload is an agent acting on per-request authority that the user just consented to on their phone, this is the pattern that matches. The credential follows the request, not the agent.
That single substitution rearranges the security model in a useful way. The agent's authority no longer follows the agent across requests. It follows the request across the agent. Two transfers with two different amounts get two different ephemeral users with the same GRANT but different audit trails. A read of customer data and a write of customer data get the same plugin, the same engine, two different roles, and two different credentials. The agent does not carry standing privilege between calls. Standing privilege only exists for as long as one transaction runs.
The other useful property is the audit join key. jti is on the JWT. jti is on the verify-rar audit emission. jti is on the Vault audit log. jti is on the Postgres audit row. One value flows from the push notification on the phone to the database GRANT. The forensic question "which transfer did this credential authorize" becomes a single SQL pivot, and the answer is a continuous chain of evidence from end to end.
The credential matched the request, not the role. For five minutes, max. And then the user vanished from the database.
The plugin is small. About 1,500 lines of Go, including tests. It is one Vault server endpoint, one Postgres connection, a handful of role-mapping schema fields.
RFC 9396 was already in the stack. Vault was already in the stack. IBM Verify was already in the stack. The piece that was missing was a credential issuer that read the RAR. Now it exists. Per-transaction database privilege is no longer a thought experiment. It is two HCL configs, one Vault plugin, and a wrapper that knows how to ask. The agent's authority to mutate your data lives five minutes and dies when the transaction does.