Dynamically extract data from Workday Adaptive Planning using Google Cloud Run (Part 1/2)

Jérémy Dessalines
3 min read · May 31, 2023

I’m deploying a Workday Adaptive Planning project for the second time.
As part of such a project, you need to:

  • Extract data from your source ERP,
  • Import data into Workday Adaptive Planning,
  • Extract data from Workday Adaptive Planning

In this article, I will share how to extract data from Workday Adaptive Planning.

I will walk through a specific use case, showing how to:

  • Extract data from Workday Adaptive Planning using APIs
  • Use Google Cloud Run, Cloud Scheduler, Secret Manager, Pub/Sub and BigQuery to store the result

High Level Architecture

The goal is to compare data from Workday Adaptive Planning with data from our source ERP, so that we can guarantee the data in Workday Adaptive Planning is reliable.

This post is the first part of the use case; the second will explain how to schedule and run this function dynamically.

How to extract data from Workday Adaptive Planning

I will use the exportData method of the Workday Adaptive Planning API.

ExportData method description:

This method’s request contains the parameters that will be used to search the data in the specified version and return values which match the requested filters and format. This is the basic method used to retrieve data from Adaptive Planning, and can be used to retrieve values from any accounts, including standard accounts, GL accounts, modeled accounts, cube accounts, custom accounts, metric accounts, assumptions, and exchange rates.

This will be a flexible approach to retrieve values from Adaptive Planning.

To be able to access data you will need:

  • Login
  • Password

Login & Password can be collected in your Workday instance. You can use a dedicated user to perform calls to the APIs.

Secret Manager

I will use Secret Manager to securely store the credentials as a JSON payload.

Create a secret called “adaptive_login” in Secret Manager in your Google Cloud project.
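As a minimal sketch (assuming a Python service and a secret payload shaped like `{"login": "...", "password": "..."}`), the credentials can then be read back with the Secret Manager client:

```python
import json

from google.cloud import secretmanager


def get_adaptive_credentials(project_id: str) -> dict:
    """Fetch the Adaptive Planning login/password stored in the "adaptive_login" secret."""
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/adaptive_login/versions/latest"
    response = client.access_secret_version(request={"name": name})
    # The secret payload is expected to be a JSON string such as
    # {"login": "api_user@company.com", "password": "********"}
    return json.loads(response.payload.data.decode("utf-8"))
```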

Specify your “exportData” Query

An exportData call must contain exactly one element of each of the following types:

  • credentials (defined above)
  • version
  • format

A request may also contain any of the following optional elements:

  • Filters
  • accounts > account
  • levels > level
  • dimensionValues > dimensionValue
  • timeSpan
  • dimensions > dimension
  • rules > currency

For this example, I will define these elements directly in my XML call.
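Below is a rough sketch (not the original snippet) of what such a request can look like from Python. The endpoint, API version number, callerName, version name, account code, and time span are placeholder assumptions; check the exportData reference for your tenant before using it.

```python
import requests

# Placeholder endpoint and API version -- check the Adaptive Planning
# documentation for the version your tenant supports.
ADAPTIVE_API_URL = "https://api.adaptiveinsights.com/api/v38"


def export_data(login: str, password: str, version_name: str = "Budget 2023") -> str:
    """Call the exportData method and return the raw XML response."""
    body = f"""<?xml version="1.0" encoding="UTF-8"?>
<call method="exportData" callerName="cloud-run-export">
  <credentials login="{login}" password="{password}"/>
  <version name="{version_name}" isDefault="false"/>
  <format useInternalCodes="true" includeUnmappedItems="false"/>
  <filters>
    <accounts>
      <account code="Revenue" isAssumption="false" includeDescendants="true"/>
    </accounts>
    <timeSpan start="01/2023" end="12/2023"/>
  </filters>
</call>"""
    response = requests.post(
        ADAPTIVE_API_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=UTF-8"},
        timeout=120,
    )
    response.raise_for_status()
    return response.text
```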

In my code I will set default values; you will need to set each one of them yourself.

Google BigQuery

Now we are going to send the data we have retrieved into BigQuery. We need to define:

  • Project_id: PROJECT_ID,
  • Dataset_id: DATASET_ID,
  • Table_id: TABLE_ID.

In my code I will set default values; you will need to set each one of them yourself.
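As a sketch, assuming the exportData response has already been parsed into a list of row dictionaries, the load into BigQuery could look like this (the project, dataset, and table names below are placeholders):

```python
from google.cloud import bigquery

# Placeholder values -- replace with your own project, dataset, and table.
PROJECT_ID = "my-project"
DATASET_ID = "adaptive_planning"
TABLE_ID = "export_data"


def load_rows_to_bigquery(rows: list[dict]) -> None:
    """Append the parsed exportData rows to a BigQuery table."""
    client = bigquery.Client(project=PROJECT_ID)
    table_ref = f"{PROJECT_ID}.{DATASET_ID}.{TABLE_ID}"
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,
    )
    job = client.load_table_from_json(rows, table_ref, job_config=job_config)
    job.result()  # wait for the load job to complete
```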

Deploying using Google Cloud Run

Define your variables (the project ID, dataset ID, and table ID defined above).

Google Cloud Run will allow us to deploy the code.
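To give an idea of the shape of the service, here is a minimal sketch of a Cloud Run HTTP entrypoint, assuming Python with Flask and the hypothetical helper functions from the earlier snippets:

```python
import os

from flask import Flask, jsonify

# Hypothetical module gathering the helpers sketched above.
from adaptive_export import (
    export_data,
    get_adaptive_credentials,
    load_rows_to_bigquery,
    parse_export_response,  # hypothetical XML-to-rows parser
)

app = Flask(__name__)


@app.route("/", methods=["POST"])
def run_export():
    """HTTP entrypoint: read credentials, call exportData, load the result into BigQuery."""
    project_id = os.environ["PROJECT_ID"]
    credentials = get_adaptive_credentials(project_id)
    xml_response = export_data(credentials["login"], credentials["password"])
    rows = parse_export_response(xml_response)
    load_rows_to_bigquery(rows)
    return jsonify({"rows_loaded": len(rows)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

In the second part, Cloud Scheduler and Pub/Sub will be used to trigger this endpoint on a schedule.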

Copy the repo and run the deployment commands from its README (typically a `gcloud builds submit` to build the container image, followed by `gcloud run deploy`).

Test your code by sending an HTTP request (for example with `curl`) to your service.

Replace `[CLOUD_RUN_SERVICE_URL]` with the URL of your Cloud Run service, and `[PROJECT_ID]` with your project ID, wherever they appear in the README file.

You will need to grant the Google Cloud Run service account:

  • Access to Secret Manager (roles/secretmanager.secretAccessor)
  • Editor and Job User access to BigQuery (roles/bigquery.dataEditor and roles/bigquery.jobUser)

The result will look like this in BigQuery:

In the next post I will dive into how to schedule this using Cloud Scheduler.

Happy coding,

See you in 2 weeks for the second part,
