How to Use the Google Search Console API

The Google Search Console UI limits exports to 1,000 rows of data, which is quite a headache when you manage a site with thousands of pages.

Filtering pages for manual exports is time-consuming, and sampling and row caps mean you never see all the data.

This guide documents how I use the Search Console API for more flexible access to Google Search Console data, so I can easily replicate the setup on another computer.

How to use Google Search Console API locally

Step 1. Enable the Search Console API in Google Cloud

  1. Go to Google Cloud Console → create a new project (or use an existing one).
  2. In that project, go to APIs & Services → Library.
  3. Search for and enable the Google Search Console API.

This connects your project to Search Console data.

Step 2. Configure OAuth consent screen

Since I run the scripts locally in my terminal, I need credentials to authenticate.

In Google Cloud Console:

  1. From the left panel, choose APIs & Services and go to OAuth consent screen.
  2. In the “Audience” tab, add a user.
  3. Go to Clients and click “+ Create client”.
  4. Choose “Desktop app” as the Application type, since the script runs on a local machine.
  5. Download the credentials JSON and rename it to client_secret.json.

⚠️ Note:
You will no longer be able to view or download the client secret once you close the dialog. Make sure you download the file and store it securely.
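For reference, here is a minimal sketch of how these credentials are typically consumed from Python later on, using google-auth-oauthlib's InstalledAppFlow. The file name and scope follow this guide's setup; the function name is my own, not from the repo.

```python
# Read-only scope for Search Console data.
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]

def get_credentials(client_secret_path="client_secret.json"):
    """Open a browser window to authorize and return OAuth credentials."""
    # Package installed in Step 4.
    from google_auth_oauthlib.flow import InstalledAppFlow

    flow = InstalledAppFlow.from_client_secrets_file(client_secret_path, SCOPES)
    return flow.run_local_server(port=0)

# creds = get_credentials()  # first run opens the browser; see Step 6
```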

Step 3. Create a project folder on your computer

On your computer, create a folder to keep your script and results. Keeping everything in one folder makes it easier to manage authentication files and results as your project grows.

Tips:
Things get messy fast if results are saved directly to the project folder, so I highly recommend creating separate subfolders for input (such as lists of keywords or pages) and results.

mkdir gsc-api-demo  # create a new folder

This is my current project structure:

gsc-api-demo/
├── client_secret.json
├── token.json             # auto-created after first OAuth run
├── check_indexing.py      # script
├── input/
│   └── urls.txt
└── results/
    └── 2026-01-22/        # organize by date
        └── index_status.csv
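The subfolders in the tree above can be created in one go. A sketch, run from inside the project folder (the results subfolder name here just uses today's date):

```shell
mkdir -p input "results/$(date +%F)"   # input and date-stamped results folders
touch input/urls.txt                   # empty URL list to fill in later
```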

Step 4. Set up a local environment and install packages

Open the terminal, change into your project folder, and create and activate a Python virtual environment:

cd gsc-api-demo  # change directory to the new folder

python3 -m venv .venv

source .venv/bin/activate   # macOS/Linux

# .venv\Scripts\activate    # Windows PowerShell

pip install --upgrade pip

pip install google-api-python-client google-auth google-auth-oauthlib google-auth-httplib2
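To make the setup easier to replicate on another computer, you can also pin these packages in a requirements.txt (a common convention; this file is my suggestion, not necessarily part of the guide's repo):

```
google-api-python-client
google-auth
google-auth-oauthlib
google-auth-httplib2
```

Then a fresh machine only needs `pip install -r requirements.txt`.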

Step 5. Create the script

Create a .py file for your script in the project folder. Feel free to use ChatGPT or Claude for help when writing it.

My GitHub repo currently has the following scripts:

  • page_indexing.py: Check page indexing status for a list of URLs using the URL Inspection API
  • keyword_performance.py: Check performance data for a list of keywords during a specified date range
  • page_performance_comparison.py: Compare page performance vs previous period
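As a sketch of what a performance script sends, the Search Analytics endpoint (searchanalytics.query) takes a JSON request body. The helper below is a hypothetical example, not code from the repo; dimensions and date range are illustrative.

```python
def build_query(start_date, end_date, row_limit=25000, start_row=0):
    """Build a request body for searchanalytics.query (dates are YYYY-MM-DD)."""
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query", "page"],
        "rowLimit": row_limit,   # the API returns at most 25,000 rows per request
        "startRow": start_row,   # increase in steps of row_limit to paginate
    }
```

Paginating with startRow is how you get past the row caps that make the UI exports painful.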

Step 6. Run the script

In the terminal, run this command line:

python gsc_query.py  # replace with the name of your python script

The first run will open a browser window asking you to authorize. After that, it saves token.json so future runs are non-interactive.
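To keep outputs organized by date as in the folder structure above, the scripts can write rows into a dated results subfolder. A minimal stdlib-only helper (the function name and row fields are my own illustration):

```python
import csv
from datetime import date
from pathlib import Path

def save_rows(rows, filename, results_dir="results"):
    """Write a list of dicts to results/YYYY-MM-DD/<filename> and return the path."""
    out_dir = Path(results_dir) / date.today().isoformat()
    out_dir.mkdir(parents=True, exist_ok=True)  # create the dated folder if needed
    out_path = out_dir / filename
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    return out_path
```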

Cheatsheets

Activate the virtual environment

If you quit the terminal session, you need to reactivate the virtual environment. Otherwise, you will see an error like “zsh: command not found: python”.

Use the cd command to move to your project directory, then run the following line to activate the virtual environment:

source .venv/bin/activate

Split GSC properties if you need more Search Console API quotas

The Search Console API enforces per-property quotas. This means your daily and per-minute limits are applied to each Search Console property separately, not to your Google Cloud project as a whole.

If your site is set up as a single Domain property, all API requests hit the same quota bucket. Large sites, international sites, or high-frequency reporting setups can run into limits quickly.

Instead, you can create additional URL-prefix properties for major sections of the site, for example (illustrative URLs):

  • https://www.example.com/
  • https://www.example.com/blog/
  • https://www.example.com/docs/

Each of these properties gets its own API quota.

Domain vs URL-Prefix Properties

One of the most common causes of “no data” errors is calling the wrong property.

The siteUrl must exactly match the GSC property you have:
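The two property types use different siteUrl formats. Domain properties use the sc-domain: prefix, while URL-prefix properties use the full URL, trailing slash included (example.com is a placeholder):

```python
# siteUrl formats accepted by the Search Console API:
DOMAIN_PROPERTY = "sc-domain:example.com"         # Domain property
URL_PREFIX_PROPERTY = "https://www.example.com/"  # URL-prefix property;
                                                  # scheme, subdomain, and
                                                  # trailing slash must match
```

If the string differs from the property as registered in Search Console, even by a missing slash, the API returns no data or a permission error.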

Access denied

If the scope changed, then you may need to delete the cached token and re-authenticate:

rm -f token.json

Add a .gitignore to avoid committing secrets

As client_secret.json and token.json contain personal credentials, they shouldn’t be committed to the GitHub repository. Make sure to create a .gitignore file:

# Python
.venv/
__pycache__/
*.pyc

# Google API secrets
client_secret.json
token.json

# Local inputs / outputs
input/
results/

# OS / editor noise
.DS_Store
.vscode/
.idea/

Conclusion

With a local setup, you can iterate quickly, export raw results, and adapt scripts to your own workflows before moving anything to a server or scheduled pipeline.

Once you’re comfortable pulling data reliably from your machine, the same approach can be extended to automation, cloud jobs, or larger reporting systems with minimal changes.

Aubrey Yung


Aubrey is an SEO Manager and Schema Markup Consultant with years of B2B and B2C marketing experience. Outside of work, she loves traveling and learning languages.