
Multi-Dimensional Reporting in CJA and Adobe Analytics

Your Options Before the 1.4 API Sunset

If you work with Adobe Analytics on a regular basis, you have probably already heard about it: in August 2026, the 1.4 Reporting API will be fully retired. This is a meaningful change because the 1.4 API was, for a lot of teams, the easiest way to request multi-dimensional reports — meaning a single API call returning a breakdown across several dimensions at once.

With the 1.4 API gone, the question becomes: how do I keep doing multi-dimensional reporting without breaking my existing pipelines?

There are essentially two paths forward, and in this post I’ll walk through both — including the changes I’ve added to my Python wrappers (cjapy and aanalytics2) to make this transition easier.

The two paths are:

  1. Migrate to Customer Journey Analytics (CJA) — the future-proof option.
  2. Stay on Adobe Analytics 2.0 and lean on smarter breakdown logic + multi-threading — the pragmatic option for those who can’t migrate by August 2026.

Let’s review both.

Path 1 — Migration to Customer Journey Analytics

The “future-proof” path is to migrate your reporting to Customer Journey Analytics (CJA). Since 2026, the CJA Reporting API natively supports multi-dimensional report requests, which means you can ask for several dimensions in a single call — exactly what the 1.4 API used to enable.

That said, this is easier said than done. For most customers, migrating from Adobe Analytics to CJA is not just an API code change — it’s a full data architecture project. You typically need to:

  • Migrate your Solution Design Reference (SDR) to a Schema Data Model in Adobe Experience Platform (XDM).
  • Migrate your tag implementation from AppMeasurement to the AEP Web SDK.
  • Establish the connection in CJA to your AEP datasets.
  • Set up your Data Views (dimensions, metrics, derived dimensions, attribution settings, etc.).
  • Re-define CJA permissions for your users.

That’s a fairly big lift if your only goal is to keep multi-dimensional reporting working. But if you were already considering moving to CJA, the multi-dimensional API is a strong additional reason to commit. It is, by far, the most sustainable solution.

CJA API limits to keep in mind

Before you go all-in, be aware that CJA API reporting comes with license-based limits:

  • Concurrent report requests (per connection): Foundation 5, Select 6, Prime 8, Ultimate 10
  • Total monthly report requests (per IMS Organization): Foundation 500,000, Select 750,000, Prime 1,500,000, Ultimate 5,000,000


So even on the most generous tier, large-scale automated reporting needs to be planned carefully.
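If you want to sanity-check your automation against these caps, a back-of-the-envelope budget helps. A quick sketch (the tier caps come from the table above; the workload numbers are purely illustrative):

```python
# Illustrative budget check against the CJA monthly report-request caps.
# Tier limits are from the table above; the workload figure is made up.
TIER_MONTHLY_LIMITS = {
    "Foundation": 500_000,
    "Select": 750_000,
    "Prime": 1_500_000,
    "Ultimate": 5_000_000,
}

def monthly_requests(reports_per_day: int, days: int = 31) -> int:
    """With multi-dimensional requests, one report is one API call."""
    return reports_per_day * days

usage = monthly_requests(reports_per_day=2_000)   # 62,000 calls per month
for tier, cap in TIER_MONTHLY_LIMITS.items():
    print(f"{tier}: {usage:,} / {cap:,} ({cap - usage:,} headroom)")
```

Remember that the concurrency limit applies per connection, so a burst of parallel requests can hit the 5-to-10 concurrent cap long before the monthly cap becomes relevant.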

Multi-dimensional reports in cjapy

To make the multi-dimensional capability easier to use, I’ve updated the cjapy library — specifically the RequestCreator class — so you can now build multi-dimensional report requests directly, without manually crafting the JSON.

Here’s a minimal example:

import cjapy
from cjapy.requestCreator import RequestCreator

cjapy.importConfigFile('myconfig.json')
cja = cjapy.CJA()

# Build the request
myrequest = RequestCreator()
myrequest.setDataViewId("dv_XXXXX")
myrequest.addGlobalFilter('2026-03-08T00:00:00.000/2026-04-07T00:00:00.000')
myrequest.addMetric("metrics/occurrences")

# Single-dimension (legacy, still works but deprecated for multi-dim use)
# myrequest.setDimension("variables/eventType")

# New: multi-dimensional request — accepts a string or a list of strings (max 5)
myrequest.setDimensions([
    "variables/eventType",
    "variables/web.webReferrer.type",
    "variables/_tenant.path.field",
])

# Run the report
workspace = cja.getReport(myrequest)
df = workspace.dataframe   # pandas DataFrame with the multi-dimensional result

A couple of notes on this:

  • setDimension (singular) still works but is now deprecated for multi-dimensional scenarios — setDimensions (plural) is the way forward.
  • You can pass up to 5 dimensions in a single request.
  • Breakdown logic does not apply on multi-dimensional reports — by design, since the multi-dimension request is the breakdown. Trying to chain a breakdown on top will not work.

For most use cases this is now much cleaner than the old “request + breakdown” pattern.
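The exact column layout of workspace.dataframe for multi-dimensional results isn't shown here; assuming it carries one column per requested dimension plus the metric, plain pandas gets you back to a familiar breakdown view. A sketch on mock data (column names are illustrative, check your actual dataframe):

```python
import pandas as pd

# Mock stand-in for workspace.dataframe: one column per requested dimension
# plus the metric column (the column names here are assumptions).
df = pd.DataFrame({
    "eventType": ["pageView", "pageView", "click", "click"],
    "referrerType": ["search", "social", "search", "social"],
    "occurrences": [120, 45, 30, 15],
})

# Classic "breakdown" view: first dimension as rows, second as columns
pivot = df.pivot_table(
    index="eventType",
    columns="referrerType",
    values="occurrences",
    aggfunc="sum",
    fill_value=0,
)
print(pivot)
```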

Path 2 — Adobe Analytics 2.0: Smarter Breakdowns & Multi-Threading


Not every team is willing — or able — to migrate to CJA before August 2026. The migration described above is a multi-quarter project for many organizations, and you may simply need a working multi-dimensional reporting setup on Adobe Analytics in the meantime.

The good news: you can still do this on Adobe Analytics 2.0. The bad news: you need to be careful about API call volume.

The Adobe Analytics API limits

Two limits to keep in mind on Adobe Analytics:

  1. A monthly cap of 500,000 reporting API calls, regardless of your license tier. Additional capacity can be purchased — but be mindful before you blow up your number of requests.
  2. A throughput cap of 12 requests per 6 seconds per client ID.

The tricky part is that every API call that returns data counts against the monthly limit — including each individual breakdown request. So a “single multi-dimensional report” on Analytics 2.0 is actually N API calls under the hood, where N depends on how many items you’re breaking down by.

Before you push your breakdown logic into production, I’d strongly recommend estimating the number of API calls your breakdown reports will generate per month and comparing that to your 500K cap.
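That estimate is simple arithmetic. A sketch (the item counts are illustrative; the key point is that every parent item at every level costs one additional call):

```python
# Rough estimator for the number of 2.0 API calls a nested breakdown needs.
# items_per_level[i] = expected number of items at breakdown level i.
# The last level's items don't trigger further calls, hence the [:-1].
def estimate_breakdown_calls(items_per_level: list[int]) -> int:
    total = 1          # the base report itself
    parents = 1
    for n_items in items_per_level[:-1]:
        parents *= n_items     # each parent item triggers one breakdown call
        total += parents
    return total

# e.g. 500 pages broken down by an eVar, run daily for a month:
per_run = estimate_breakdown_calls([500, 20])   # 1 base + 500 breakdowns = 501
monthly = per_run * 31                          # 15,531 calls against the 500K cap
```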

How breakdowns work on the 2.0 API

Unlike the 1.4 API (and unlike CJA), the Adobe Analytics 2.0 API does not allow you to request multiple dimensions in one call. The pattern is:

  1. Run a base report to retrieve the items of your first dimension.
  2. For each item returned, run an additional breakdown request on the next dimension.
  3. Repeat for every level of the breakdown.
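The steps above can be sketched in plain Python, with stub functions standing in for the real API calls (run_base_report and run_breakdown are hypothetical placeholders, not aanalytics2 methods):

```python
# Structure of the 2.0 breakdown pattern. The two functions below are stubs
# that only count calls; in practice each would be a getReport2 request.
api_calls = 0

def run_base_report(dimension):
    """Stub: pretend the base report returns three itemIds."""
    global api_calls
    api_calls += 1
    return ["item1", "item2", "item3"]

def run_breakdown(parent_item_id, dimension):
    """Stub: one breakdown call per parent item."""
    global api_calls
    api_calls += 1
    return {"parent": parent_item_id, "rows": []}

# 1. Base report on the first dimension
items = run_base_report("variables/page")

# 2. One breakdown request per returned item, on the second dimension
results = [run_breakdown(item, "variables/evar3") for item in items]

# 3 items -> 1 + 3 = 4 API calls for a two-dimension report
print(api_calls)
```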

The getReport2 method and the Workspace class in aanalytics2 already let you do breakdowns dynamically, which is convenient — but when your base dimension has hundreds or thousands of items, runtime becomes the real problem. The 12-requests-per-6-seconds throttle dominates everything.

The aanalytics2 library has built-in retry / backoff to handle the threshold gracefully, but your script still has to wait. If you have 5,000 items to break down, that’s a long-running job no matter how clean your code is.

The unlock: multiple Analytics class instances

Here’s the key insight: the 12-requests-per-6-seconds limit is enforced per client ID, not per user, IMS Org, or report suite. And there’s no limit on the number of Adobe Developer Console projects (and therefore client IDs) you can create within an org.

In other words: with N client IDs, you effectively get N × 12 requests per 6 seconds.
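A quick runtime calculation makes the benefit concrete (the 12-requests-per-6-seconds throttle is the only fixed constant here; the request volume is illustrative):

```python
# Throughput estimate: the throttle is 12 requests per 6 seconds PER client ID.
REQUESTS_PER_WINDOW = 12
WINDOW_SECONDS = 6

def min_runtime_seconds(n_requests: int, n_client_ids: int = 1) -> float:
    """Lower bound on wall-clock time, ignoring network latency and retries."""
    per_second = (REQUESTS_PER_WINDOW / WINDOW_SECONDS) * n_client_ids
    return n_requests / per_second

# 5,000 breakdown requests:
print(min_runtime_seconds(5_000, n_client_ids=1))  # 2500.0 s (~42 min)
print(min_runtime_seconds(5_000, n_client_ids=3))  # ~833.3 s (~14 min)
```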

Up until recently, aanalytics2 was built on a “1 user → 1 API connection” assumption, which made this kind of parallelization awkward. With the latest release, that’s no longer the case.

I’ve introduced a return_object parameter on both configure and importConfigFile, which returns an isolated ConfigObj instance instead of writing to the module’s global config state. Each ConfigObj owns its own copies of config and headers — there’s no shared memory between them — which means you can safely run several Analytics instances in parallel, each using a different client ID.

A complete write-up is available here on GitHub.

Putting it all together: parallel breakdown reporting

Here’s a worked example showing the pattern end-to-end. The idea:

  1. Create multiple Analytics instances, each tied to a different client ID.
  2. Run a base report to retrieve all itemIds of the dimension you want to break down.
  3. Build one report request per item.
  4. Distribute the requests across your instances and run them in parallel using concurrent.futures.


import aanalytics2 as api2
from aanalytics2 import RequestCreator
from concurrent.futures import ThreadPoolExecutor, as_completed

# -----------------------------------------------------------------------------
# 1. Create multiple isolated Analytics instances (one per client ID / project)
# -----------------------------------------------------------------------------
# Each config file corresponds to a separate Adobe Developer Console project.
# Using return_object=True ensures each ConfigObj is fully isolated.

cfg1 = api2.importConfigFile('creds_client1.json', return_object=True)
cfg2 = api2.importConfigFile('creds_client2.json', return_object=True)
cfg3 = api2.importConfigFile('creds_client3.json', return_object=True)

# Retrieve the company ID once (any config will do)
login = api2.Login(config=cfg1)
cid = login.getCompanyId()[0]['globalCompanyId']

# Instantiate one Analytics object per config — unpacking the ConfigObj
analytics_pool = [
    api2.Analytics(cid, **cfg1),
    api2.Analytics(cid, **cfg2),
    api2.Analytics(cid, **cfg3),
]

# -----------------------------------------------------------------------------
# 2. Run the base report to get all itemIds of the first dimension
# -----------------------------------------------------------------------------
RSID = "myreportsuite"
DATE_RANGE = "2026-03-01T00:00:00.000/2026-04-01T00:00:00.000"

base_request = RequestCreator()
base_request.setReportSuiteID(RSID)
base_request.setDimension("variables/page")
base_request.addMetric("metrics/visits")
base_request.setDateRange(DATE_RANGE)
base_request.setLimit(10000)   # adjust to your dataset

# Use the first instance for this base call; item_id=True returns the itemIds
base_workspace = analytics_pool[0].getReport2(base_request, item_id=True)
base_df = base_workspace.dataframe

# Extract the itemIds we'll iterate on
item_ids = base_df['itemId'].tolist()
print(f"Base dimension returned {len(item_ids)} items to break down.")

# -----------------------------------------------------------------------------
# 3. Build one breakdown request per item
# -----------------------------------------------------------------------------
def build_breakdown_request(parent_item_id):
    """Return a RequestCreator broken down by a second dimension for a given parent itemId."""
    req = RequestCreator()
    req.setReportSuiteID(RSID)
    req.setDimension("variables/evar3")   # the dimension you're breaking down INTO
    req.addMetric("metrics/visits")
    req.setDateRange(DATE_RANGE)
    # Add a breakdown filter on the parent dimension's itemId
    req.addGlobalFilter(f"variables/page=={parent_item_id}")
    req.setLimit(1000)
    return req

requests = [build_breakdown_request(iid) for iid in item_ids]

# -----------------------------------------------------------------------------
# 4. Distribute requests across instances and run in parallel
# -----------------------------------------------------------------------------
def run_request(instance, request, parent_item_id):
    workspace = instance.getReport2(request)
    df = workspace.dataframe
    df['parent_itemId'] = parent_item_id
    return df

results = []
with ThreadPoolExecutor(max_workers=len(analytics_pool)) as executor:
    futures = []
    for idx, (req, parent_id) in enumerate(zip(requests, item_ids)):
        # Round-robin assignment across the pool
        instance = analytics_pool[idx % len(analytics_pool)]
        futures.append(executor.submit(run_request, instance, req, parent_id))

    for fut in as_completed(futures):
        try:
            results.append(fut.result())
        except Exception as e:
            print(f"A breakdown request failed: {e}")

# Combine everything into a single multi-dimensional dataframe
import pandas as pd
final_df = pd.concat(results, ignore_index=True)
print(final_df.head())

A few practical notes on this pattern:

  • Round-robin distribution is the simplest scheduler, but you could go fancier (e.g. a queue with workers pulling jobs as they free up) for very uneven runtimes per request.
  • The aanalytics2 library’s built-in retry will still kick in if a single instance hits its rate limit — but with multiple client IDs you should hit it far less often.
  • Don’t forget: every breakdown request still counts against your 500,000-per-month quota. Parallelization makes you faster, not cheaper.
  • Test on a small subset first. It’s surprisingly easy to spawn 5,000+ requests by accident.
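As a sketch of the fancier scheduler mentioned in the first note, with workers pulling jobs from a shared queue instead of round-robin assignment, here is the general shape. run_request is a stub standing in for the real getReport2 call, and the instance strings stand in for Analytics objects:

```python
import queue
import threading

# Queue-based scheduler sketch: each worker owns one Analytics instance and
# pulls the next job as soon as it frees up, so one slow request doesn't
# stall a whole round-robin "lane".
def run_request(instance, item_id):
    # Stub: in practice, call instance.getReport2(...) and return a dataframe.
    return {"instance": instance, "parent_itemId": item_id}

def worker(instance, jobs: queue.Queue, results: list, lock: threading.Lock):
    while True:
        try:
            item_id = jobs.get_nowait()
        except queue.Empty:
            return
        row = run_request(instance, item_id)
        with lock:
            results.append(row)
        jobs.task_done()

jobs = queue.Queue()
for item_id in [f"item{i}" for i in range(100)]:
    jobs.put(item_id)

instances = ["client_id_1", "client_id_2", "client_id_3"]   # stand-ins
results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(inst, jobs, results, lock))
           for inst in instances]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))   # 100
```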

Which path should you choose?

A quick decision guide:

  • You’re already migrating to CJA, or the migration is on your near-term roadmap → start using cjapy’s setDimensions now and design your future pipelines around the multi-dimensional API. Path 1.
  • You’re staying on Adobe Analytics for the foreseeable future, or migration won’t be done by August 2026 → invest in the multi-instance + parallelized breakdown pattern. Path 2.
  • You’re somewhere in the middle → it’s reasonable to run both in parallel during the transition: keep AA reporting alive with Path 2, and start porting your most critical multi-dim reports to CJA as soon as your Data Views are ready.

In all cases, the August 2026 sunset of the 1.4 API is a hard deadline you can’t ignore. The earlier you map out your reporting dependencies on it, the smoother this transition is going to be.
