Most enterprises that run an application for many customers, stores, or business units end up in the same spot: one PDB per tenant, and a growing tax every time the application changes. Patches multiply. Reference data drifts. Cross-tenant reporting becomes its own project. The Multitenant architecture solves the “one engine, many databases” problem, but on its own it does not solve the “one application, many tenants” problem.
Application Containers do.
Application Containers are a feature of Oracle Multitenant that lets you define an application once (schema, code, reference data, version metadata) in a specialized PDB called the Application Root, and then replay that definition into any number of Application PDBs that act as isolated tenant databases. The tenant data stays local. The application definition stays centralized.
The patching cycle goes from N times to one.
In a previous post, I walked through the CONTAINERS clause for cross-PDB reporting from CDB$ROOT. This post is the bigger pattern that sits one layer up. The CONTAINERS clause shows up again at the end, and the reason it works so cleanly inside an Application Container is the whole point of the architecture.
The shape of the pattern
An Application Container has three roles:
| Role | What it is | What it holds |
|---|---|---|
| Application Root | A specialized PDB created with AS APPLICATION CONTAINER | The application definition: shared schemas, PL/SQL, reference data, version metadata |
| Application Seed | An optional, auto-named PDB (<root_name>$SEED) | A pre-synced template, used to provision new Application PDBs fast |
| Application PDBs | The tenant databases | Tenant-local rows; same application definition replayed from the Root |
The Application Root is a PDB itself: it sits under the CDB root like any other PDB, but it owns the Application PDBs beneath it. Everything in the container hangs off the Root.
```
CDB
│
└── Application Root   <- the Application Container starts here
    │
    ├── Shared application definition (schemas, PL/SQL, reference data, versions)
    ├── Application Seed
    ├── Application PDB 1
    ├── Application PDB 2
    └── Application PDB 3
```
A national retailer scenario maps onto this cleanly: the Application Root is corporate HQ holding the sales application, the Application PDBs are the individual stores, and the Application Seed is the template used to onboard a new store fast. Tax codes and country lookups are managed once at HQ and visible everywhere. Customer orders and per-store inventory stay local to each store. That mapping is exactly what the demo below builds.
Why this matters for an enterprise
Before the code, here is a quick comparison of application development and operational management without and with Application Containers.
| Without Application Containers | With Application Containers |
|---|---|
| One copy of the application per tenant PDB. Patches and upgrades run N times. | One application definition in the Root. One install. One upgrade. Replayed into every tenant PDB. |
| Reference data (tax codes, country lookups, product hierarchies) drifts between tenants. | Reference data lives in the Root and is consistent across every tenant by definition. |
| Cross-tenant reporting requires mapped metadata views or database links or staging pipelines. | A single SQL session in the Root queries every tenant via the CONTAINERS clause. |
| Provisioning a new tenant means another full application install. | A new Application PDB is cloned from the Application Seed and is online with the app already loaded. |
| No native version control over the application as a whole. | Versions and patches are first-class objects, captured and replayed with full audit visibility. |
The demo environment
The walkthrough below uses the following names. Adjust to your environment.
| Identifier | Value |
|---|---|
| CDB | FREE |
| Application Root | app_root |
| Application Seed | app_root$SEED (auto-named) |
| Application PDBs | sales_pdb1, sales_pdb2 |
| Application name | sales_app |
| Application common user | sales_app_user |
Connect as a privileged user (e.g. SYS AS SYSDBA) before running anything.
Step 1: Create the Application Root
From the CDB root:
```sql
ALTER SESSION SET CONTAINER = CDB$ROOT;

CREATE PLUGGABLE DATABASE app_root
  AS APPLICATION CONTAINER
  ADMIN USER app_admin IDENTIFIED BY "MyPassword_123";

ALTER PLUGGABLE DATABASE app_root OPEN;
ALTER PLUGGABLE DATABASE app_root SAVE STATE;
```
`AS APPLICATION CONTAINER` is the clause that distinguishes this PDB from a regular tenant PDB. `ADMIN USER app_admin` is required when creating any PDB from the seed: Oracle creates this user as a local user in the new container and grants it the `PDB_DBA` local role. It is the scoped administrator for the Root, separate from `SYS` and `SYSTEM`. `SAVE STATE` makes the Root auto-open on instance restart.
Step 2: Open the install capture window
Connect into the Application Root and start an application install:
```sql
ALTER SESSION SET CONTAINER = app_root;

ALTER PLUGGABLE DATABASE APPLICATION sales_app
  BEGIN INSTALL '1.0';
```
This is where Application Containers diverge from “just another PDB.” Everything you run between BEGIN INSTALL and END INSTALL is captured as part of the application definition for version 1.0. The Root’s data dictionary remembers the statements, in order. PDBs that later sync to this application will replay those statements in the same order against their own dictionaries.
It is a statement journal, not a metadata link.
Step 3: Create the application common user
```sql
CREATE BIGFILE TABLESPACE app_data
  DATAFILE SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 2G;

CREATE USER sales_app_user IDENTIFIED BY "MyPassword_123"
  DEFAULT TABLESPACE app_data
  TEMPORARY TABLESPACE TEMP
  QUOTA UNLIMITED ON app_data
  CONTAINER = ALL;

GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, DB_DEVELOPER_ROLE
  TO sales_app_user
  CONTAINER = ALL;
```
The `CONTAINER = ALL` clause makes sales_app_user an application common user: it exists in the Application Root and in every Application PDB that belongs to the Root. Application common objects are owned by application common users, so this user is the one that will own the shared schema in a moment.
The dedicated app_data tablespace avoids ORA-01950 quota errors and avoids assuming a USERS tablespace exists. Small thing, but it saves you from tablespace problems in the future.
Step 4: Define the shared application objects
Three sharing models exist for objects created during an application install. They are the heart of the design.
| Model | What’s shared | What’s local |
|---|---|---|
| `SHARING = METADATA` | Object structure (columns, constraints, triggers, code) | Row data — separate in every PDB |
| `SHARING = DATA` | Structure and rows | Nothing — read-only in PDBs |
| `SHARING = EXTENDED DATA` | Structure and shared base rows | Each PDB may add its own rows |
METADATA is the default. Most application objects use it.
```sql
-- Per-store product catalog: same columns everywhere, different rows per store
CREATE TABLE sales_app_user.product_catalog
  SHARING = METADATA
( product_id   NUMBER GENERATED BY DEFAULT AS IDENTITY,
  product_name VARCHAR2(100) NOT NULL,
  category     VARCHAR2(50),
  list_price   NUMBER(10,2),
  CONSTRAINT product_catalog_pk PRIMARY KEY (product_id)
);

-- Tax codes: same rows in every store, read-only in tenant PDBs
CREATE TABLE sales_app_user.tax_codes
  SHARING = DATA
( tax_code    VARCHAR2(20) PRIMARY KEY,
  description VARCHAR2(100),
  tax_rate    NUMBER(5,2)
);

INSERT INTO sales_app_user.tax_codes (tax_code, description, tax_rate)
VALUES ('STANDARD', 'Standard tax rate', 6.25);

INSERT INTO sales_app_user.tax_codes (tax_code, description, tax_rate)
VALUES ('REDUCED', 'Reduced tax rate', 2.50);

COMMIT;

-- Customer orders: same structure, tenant-local rows
CREATE TABLE sales_app_user.customer_orders
  SHARING = METADATA
( order_id     NUMBER GENERATED BY DEFAULT AS IDENTITY,
  customer_id  NUMBER NOT NULL,
  product_id   NUMBER NOT NULL,
  order_amount NUMBER(10,2),
  order_ts     TIMESTAMP DEFAULT SYSTIMESTAMP,
  CONSTRAINT customer_orders_pk PRIMARY KEY (order_id)
);
```
For the retailer: every store gets the same product_catalog and customer_orders table definition but each store’s rows stay in its own PDB. Tax codes are managed once at HQ and replicated to every store as data-linked rows. If HQ updates a tax rate, that change becomes visible to every store at the next sync.
A common misconception: SHARING = METADATA does not mean the object is a pointer. The structure is recorded in the Root, but when a PDB syncs, a real table segment is created in that PDB. PDBs hold replayed copies, not links.
Step 5: Add shared code
Let’s add two PL/SQL packages: one for generating tenant data on demand, one for surfacing the application version.
```sql
CREATE OR REPLACE PACKAGE sales_app_user.demo_data_gen AS
  PROCEDURE generate_orders (p_row_count IN NUMBER DEFAULT 100);
END;
/

CREATE OR REPLACE PACKAGE BODY sales_app_user.demo_data_gen AS
  PROCEDURE generate_orders (p_row_count IN NUMBER DEFAULT 100) IS
  BEGIN
    FOR i IN 1 .. p_row_count LOOP
      INSERT INTO sales_app_user.customer_orders (customer_id, product_id, order_amount)
      VALUES (
        TRUNC(DBMS_RANDOM.VALUE(1000, 9999)),
        TRUNC(DBMS_RANDOM.VALUE(1, 25)),
        ROUND(DBMS_RANDOM.VALUE(10, 5000), 2)
      );
      IF MOD(i, 1000) = 0 THEN
        COMMIT;
      END IF;
    END LOOP;
    COMMIT;
  END generate_orders;
END;
/

GRANT SELECT ON SYS.DBA_APPLICATIONS TO sales_app_user;

CREATE OR REPLACE PACKAGE sales_app_user.sales_util AS
  FUNCTION app_version RETURN VARCHAR2;
END;
/

CREATE OR REPLACE PACKAGE BODY sales_app_user.sales_util AS
  FUNCTION app_version RETURN VARCHAR2 IS
    l_version VARCHAR2(4000);
  BEGIN
    SELECT app_name || ' version ' || app_version
      INTO l_version
      FROM sys.dba_applications
     WHERE app_name = 'SALES_APP'
       AND app_implicit = 'N';
    RETURN l_version;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      RETURN 'SALES_APP version not found';
  END app_version;
END;
/
```
Both packages are metadata-linked. The code lives once in the Root. Every PDB gets the same code via replay during sync. The sales_util.app_version function will be useful later as a one-line verification that an upgrade actually landed in a given PDB.
Step 6: Close the capture window
```sql
ALTER PLUGGABLE DATABASE APPLICATION sales_app
  END INSTALL;
```
The application is now registered in the Application Root at version 1.0. You can confirm:
```sql
SELECT app_name, app_version, app_status
  FROM dba_applications
 ORDER BY app_name;
```
Step 7: Create the Application Seed
The Application Seed is a special PDB that holds a pre-synced template of the application definition. Provisioning new Application PDBs from the Seed is faster than re-running the install, and it guarantees the new tenant starts at a known version.
```sql
ALTER SESSION SET CONTAINER = app_root;

CREATE PLUGGABLE DATABASE AS SEED
  ADMIN USER seed_admin IDENTIFIED BY "MyPassword_123";

ALTER PLUGGABLE DATABASE app_root$seed OPEN;

ALTER SESSION SET CONTAINER = app_root$seed;
ALTER PLUGGABLE DATABASE APPLICATION sales_app SYNC;
```
The seed name is auto-derived: <application_root_name>$SEED. You do not specify it. After the SYNC, the seed contains the version 1.0 application definition and is ready to serve as a clone source.
Step 8: Create the tenant Application PDBs
Now create two stores. Because the current container is the Application Root and an Application Seed exists, Oracle clones the new PDBs from the Seed automatically.
```sql
ALTER SESSION SET CONTAINER = app_root;

CREATE PLUGGABLE DATABASE sales_pdb1 ADMIN USER pdb_admin IDENTIFIED BY "MyPassword_123";
CREATE PLUGGABLE DATABASE sales_pdb2 ADMIN USER pdb_admin IDENTIFIED BY "MyPassword_123";

ALTER PLUGGABLE DATABASE sales_pdb1 OPEN;
ALTER PLUGGABLE DATABASE sales_pdb2 OPEN;
```
That implicit behavior, “clone from the Seed if one exists,” is the single biggest provisioning advantage of the pattern. A new tenant comes online with the application already there.
Step 9: Sync the tenant PDBs
Even though the Seed was synced, each new Application PDB still needs an explicit sync to register the application against its own data dictionary:
```sql
ALTER SESSION SET CONTAINER = sales_pdb1;
ALTER PLUGGABLE DATABASE APPLICATION sales_app SYNC;

ALTER SESSION SET CONTAINER = sales_pdb2;
ALTER PLUGGABLE DATABASE APPLICATION sales_app SYNC;
```
What SYNC actually does is replay the captured install (and any later upgrades or patches) against the PDB’s dictionary. After this completes, the PDB has the version 1.0 application registered and ready to use.
Step 10: Insert tenant-specific data
Each store gets its own rows in the metadata-linked tables.
```sql
ALTER SESSION SET CONTAINER = sales_pdb1;

INSERT INTO sales_app_user.product_catalog (product_name, category, list_price)
VALUES ('Laptop', 'Electronics', 1299.99);

INSERT INTO sales_app_user.customer_orders (customer_id, product_id, order_amount)
VALUES (1001, 1, 1299.99);

COMMIT;

ALTER SESSION SET CONTAINER = sales_pdb2;

INSERT INTO sales_app_user.product_catalog (product_name, category, list_price)
VALUES ('Desk Chair', 'Office', 249.99);

INSERT INTO sales_app_user.customer_orders (customer_id, product_id, order_amount)
VALUES (2001, 1, 249.99);

COMMIT;
```
Connecting to each PDB and selecting from sales_app_user.customer_orders returns only the rows for that PDB. The structure is shared. The rows are not. Connecting and selecting from sales_app_user.tax_codes returns the same two rows in both PDBs because that table is `SHARING = DATA`: its rows live in the Root and every PDB sees them.
That is the entire mental model in one paragraph.
Step 11: Upgrade the application
This is where the lifecycle benefit shows up. The retailer wants to track order status, a new column plus a supporting index. Without Application Containers, this is N alter scripts to run, N indexes to create, N validation checks. With Application Containers it is one upgrade in the Root and a sync per PDB.
```sql
ALTER SESSION SET CONTAINER = app_root;

ALTER PLUGGABLE DATABASE APPLICATION sales_app
  BEGIN UPGRADE '1.0' TO '2.0';

ALTER TABLE sales_app_user.customer_orders
  ADD order_status VARCHAR2(30) DEFAULT 'NEW';

CREATE INDEX sales_app_user.customer_orders_cust_ix
  ON sales_app_user.customer_orders (customer_id);

ALTER PLUGGABLE DATABASE APPLICATION sales_app
  END UPGRADE;
```
Then sync each tenant:
```sql
ALTER SESSION SET CONTAINER = sales_pdb1;
ALTER PLUGGABLE DATABASE APPLICATION sales_app SYNC;

ALTER SESSION SET CONTAINER = sales_pdb2;
ALTER PLUGGABLE DATABASE APPLICATION sales_app SYNC;
```
Validation from a tenant PDB:
```sql
ALTER SESSION SET CONTAINER = sales_pdb1;

DESC sales_app_user.customer_orders;

SELECT index_name, table_name, status
  FROM dba_indexes
 WHERE owner = 'SALES_APP_USER';

SELECT sales_app_user.sales_util.app_version FROM dual;
```
The last query returns `SALES_APP version 2.0`. The version function reads `sys.dba_applications` from inside the tenant PDB, and the fact that it returns the correct version is itself proof that the metadata-linked dictionary view reflects the replayed application state.
Production scale impact: imagine running the same BEGIN UPGRADE / DDL / END UPGRADE block once and then iterating SYNC across 500 store PDBs from a script. The compute cost per tenant is essentially the time to replay a small captured DDL set against the local dictionary. There is no application install per tenant.
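To make the fleet-scale sync concrete, here is a minimal sketch of the kind of driver script that could iterate SYNC across hundreds of stores. It is a hedged illustration, not from the demo repo: it only generates the SQL text (you would feed the output to SQL*Plus or SQLcl yourself), and the `sales_pdbN` naming pattern is an assumption carried over from the demo; a real fleet list would come from `DBA_PDBS` or an inventory system.

```python
def sync_script(app_name, pdb_names):
    """Emit one ALTER SESSION / SYNC pair per Application PDB, in order."""
    lines = []
    for pdb in pdb_names:
        lines.append(f"ALTER SESSION SET CONTAINER = {pdb};")
        lines.append(f"ALTER PLUGGABLE DATABASE APPLICATION {app_name} SYNC;")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical fleet: 500 store PDBs named sales_pdb1 .. sales_pdb500
    stores = [f"sales_pdb{i}" for i in range(1, 501)]
    # Print the generated script; redirect to a .sql file to run it
    print(sync_script("sales_app", stores))
```

The upgrade itself still happens exactly once, in the Root; this script is only the replay fan-out.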
Step 12: Generate volume and run cross-store reporting
The retailer’s HQ wants to see what’s happening across stores. Generate different data volumes so the aggregates are interesting:
```sql
ALTER SESSION SET CONTAINER = sales_pdb1;
BEGIN
  sales_app_user.demo_data_gen.generate_orders(1000);
END;
/

ALTER SESSION SET CONTAINER = sales_pdb2;
BEGIN
  sales_app_user.demo_data_gen.generate_orders(50000);
END;
/
```
Now query across stores from the Application Root with the CONTAINERS clause:
```sql
ALTER SESSION SET CONTAINER = app_root;

SELECT con_id,
       con_id_to_con_name(con_id)  AS pdb_name,
       order_status,
       COUNT(*)                    AS order_count,
       COUNT(DISTINCT customer_id) AS unique_customers,
       ROUND(SUM(order_amount), 2) AS total_order_amount,
       ROUND(AVG(order_amount), 2) AS avg_order_amount
  FROM CONTAINERS(sales_app_user.customer_orders)
 GROUP BY con_id, con_id_to_con_name(con_id), order_status
 ORDER BY con_id, order_status;
```
This is the same CONTAINERS clause I covered in my earlier post on cross-PDB reporting. The difference is that inside an Application Container, the cross-PDB query is essentially free to write: no setup of common views per PDB, no CONTAINER_DATA configuration for a reporting user. The metadata-linked table already exists in every PDB by definition, with the same column shape, owned by the same application common user. The CONTAINERS clause just works.
For a retailer, that means HQ analytics (total revenue per store, top SKUs per region, customer overlap, daily order volume by store) come down to one query from the Application Root. No ETL. No staging warehouse for routine operational reporting.
Add a parallel hint when the data gets large:
```sql
SELECT /*+ parallel(8) */
       con_id,
       con_id_to_con_name(con_id)  AS pdb_name,
       COUNT(*)                    AS order_count,
       ROUND(SUM(order_amount), 2) AS total_sales
  FROM CONTAINERS(sales_app_user.customer_orders)
 GROUP BY con_id, con_id_to_con_name(con_id)
 ORDER BY con_id;
```
Under the hood: capture and replay
The thing that ties this all together, and the thing I would want any architect evaluating this pattern to understand, is that Application Versioning is implemented as a statement journal.
When BEGIN INSTALL or BEGIN UPGRADE opens a capture window, Oracle records the DDL and DML you run, in order, tagged with the application name and version. The capture is filtered by the session’s module name so background process activity and unrelated user statements are not mixed in. When END closes the window, the captured statements form an immutable, sequence-numbered version of the application.
When SYNC runs in an Application PDB, Oracle reads the journal and replays the captured statements against that PDB’s dictionary, advancing a per-PDB “last replayed” marker. Successful statements are removed from the local journal. Failed statements land in DBA_APP_ERRORS for investigation.
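The capture/replay mechanics above can be sketched as a toy model. This is emphatically not Oracle's implementation, just the semantics described in this section, simplified: an ordered, versioned statement journal in the Root, and a per-PDB replay position advanced by each SYNC (here modeled as a marker rather than journal removal).

```python
class AppRoot:
    """Holds the ordered statement journal for one application."""

    def __init__(self):
        self.journal = []  # list of (seq, version, statement), in capture order

    def capture(self, version, statements):
        # BEGIN INSTALL / BEGIN UPGRADE ... END: append statements in order,
        # tagged with the application version
        for stmt in statements:
            self.journal.append((len(self.journal) + 1, version, stmt))


class AppPDB:
    """A tenant PDB that replays the Root's journal on SYNC."""

    def __init__(self, name):
        self.name = name
        self.last_replayed = 0  # per-PDB marker into the Root journal
        self.applied = []       # statements replayed into this PDB
        self.errors = []        # failed statements (cf. DBA_APP_ERRORS)

    def sync(self, root):
        for seq, version, stmt in root.journal:
            if seq <= self.last_replayed:
                continue  # already replayed by an earlier SYNC
            try:
                # Stand-in for executing the DDL/DML against the local dictionary
                self.applied.append(stmt)
            except Exception as exc:
                # A real replay can fail here; failures are recorded, not lost
                self.errors.append((seq, stmt, exc))
            self.last_replayed = seq


root = AppRoot()
root.capture("1.0", ["CREATE TABLE customer_orders (...)"])
pdb = AppPDB("sales_pdb1")
pdb.sync(root)  # replays the full 1.0 install
root.capture("2.0", ["ALTER TABLE customer_orders ADD order_status ..."])
pdb.sync(root)  # replays only the new 2.0 statement
```

Two PDBs syncing to the same version walk the same journal from their own markers, which is why replay is deterministic per tenant.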
The architectural implications:
- Metadata-linked objects are not pointers. Each PDB has real, replayed copies. `DROP TABLE` in one PDB does not affect the others.
- Replay is deterministic. Two PDBs that sync to version `2.0` get the same sequence of statements applied to their dictionaries.
- Audit and lineage are first-class. Every captured statement is queryable.
The views you reach for when investigating:
| View | Purpose |
|---|---|
| `DBA_APPLICATIONS` | Registered applications, current version, status, capture module |
| `DBA_APP_VERSIONS` | Version history |
| `DBA_APP_PATCHES` | Patch history |
| `DBA_APP_STATEMENTS` | Captured statement text per application |
| `DBA_APP_PDB_STATUS` | Sync status of each Application PDB |
| `DBA_APP_ERRORS` | Errors encountered during replay |
If you have ever spent a release weekend chasing “did this DDL run in PDB47?”, these views answer that question without leaving SQL*Plus.
What this gives an enterprise
Pulling back from the SQL, the pattern delivers five things that matter at scale:
| Outcome | Mechanism |
|---|---|
| Lower application maintenance cost | One install, one upgrade, one patch — replayed N times by Oracle, not by your release pipeline |
| Consistent reference data | SHARING = DATA for lookups, tax codes, country codes — managed once at HQ |
| Tenant isolation preserved | SHARING = METADATA keeps per-tenant rows local; security boundaries between PDBs unchanged |
| Fast provisioning | The Application Seed makes new tenant PDB creation a clone-and-open operation |
| Native cross-tenant reporting | The CONTAINERS clause queries metadata-linked tables across every open PDB from one session |
Add the audit trail in DBA_APP_STATEMENTS and DBA_APP_PDB_STATUS, and you also get a governance story: every application change is captured, versioned, and traceable per tenant. That tends to matter in regulated industries (financial services, healthcare, public sector) where “prove every tenant got the patch” is a real question.
Things to watch
A few practical reminders before you build this in your own environment.
- Set the session module name explicitly during install. Capture is filtered by `DBMS_APPLICATION_INFO.SET_MODULE`. If a connection pool changes modules mid-install, captured statements can be incomplete. Set it at the top of every install script.
- Application common users own the shared objects. Use `CONTAINER = ALL` when you create the application user, and create application objects under that user during the capture window.
- Application PDBs need an explicit `SYNC` after creation. Cloning from a synced Seed is not the same as syncing: the new PDB still needs to register the application against its own dictionary.
- Oracle Managed Files is effectively required. Tim Hall’s Oracle-Base writeup flags this: synchronization can fail in non-OMF environments without careful `FILE_NAME_CONVERT` settings.
- The Application Root must be empty before it can be dropped. Drop the Application PDBs first, then the Seed, then the Root. Run the cleanup as a separate script: an accidental “select all -> execute” of the demo will take your demo environment with it.
- `CONTAINERS` queries only return data from open PDBs. Closed tenants are invisible to cross-PDB reporting.
Running the full demo script
I packaged the entire demo into a single SQL script meant to be run from SQL*Plus or SQLcl while connected to CDB$ROOT as SYSDBA:
```
sqlplus / as sysdba @application_containers_demo.sql
```
The script has 25 numbered sections, matching the steps above plus the second upgrade, the validation queries, and the cleanup section at the end. Cleanup is the last section and is clearly marked as destructive: it drops the Application PDBs, the Seed, and the Application Root in dependency order.
Cleanup
If you want to tear down the demo:
```sql
ALTER SESSION SET CONTAINER = CDB$ROOT;

ALTER PLUGGABLE DATABASE sales_pdb1 CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE sales_pdb1 INCLUDING DATAFILES;

ALTER PLUGGABLE DATABASE sales_pdb2 CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE sales_pdb2 INCLUDING DATAFILES;

ALTER PLUGGABLE DATABASE app_root$seed CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE app_root$seed INCLUDING DATAFILES;

ALTER PLUGGABLE DATABASE app_root CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE app_root INCLUDING DATAFILES;
```
Final thought
Application Containers are not the right answer for every multitenant deployment. If your tenants run materially different applications, or your operating model is “every tenant is on their own patch level forever,” a plain PDB-per-tenant design is simpler. But for the cases where the application is genuinely the same code running across many isolated tenants (SaaS platforms, multi-location retail, multi-region financial systems, regulated workloads where governance over the application lifecycle is the differentiator), the math tilts hard toward this pattern.
One application. One upgrade. Many tenants. Cross-tenant reporting in one SQL session.
It is one of the more underused features in Oracle Multitenant, and it deserves a second look.
Full demo code available here: https://github.com/oramatt/mySQLScripts/blob/main/application_containers_demo.sql
References
- Oracle Database Concepts: Application Containers
- Oracle Multitenant Administrator’s Guide: Administering Application Containers
- Oracle Multitenant Administrator’s Guide: Creating Application Seeds
- Oracle Database SQL Language Reference: CREATE PLUGGABLE DATABASE
- Oracle Database Reference: DBA_APPLICATIONS
- Oracle Database Reference: DBA_APP_STATEMENTS
- Tim Hall: Multitenant — Application Containers in Oracle Database 12c Release 2 (Oracle-Base)
- Franck Pachot: 12c Multitenant internals — PDB replay DDL for common users (dbi-services)
- My earlier post: Query Data Across Multiple PDBs with the Oracle Database CONTAINERS Clause