My Morning IBM i Nightmare: Critical Storage Hit, and How I Fixed It by Deleting Performance Data
It’s February 5th, 2026, and quite surprisingly, my day started with a bang. I logged into my main system this morning around 8 AM, sipping my Yorkshire Tea, only to see those dreaded CPF0907 messages flooding QSYSOPR. The system was screaming about critical storage, with ASP usage spiking over 95%. Jobs were crawling, and I could feel the panic rising, thinking the whole box might grind to a halt. Turns out, the culprit was a massive buildup of performance data from Collection Services that I had neglected to clean up. If you’ve been there, you know the drill.
Today, I’ll share exactly what happened and how I clawed back that space, step by step, so you can avoid or fix the same mess on your IBM i setup.
What Went Down This Morning
I fired up WRKSYSSTS first thing, and OOPS, the system ASP was at 96% full. Performance was tanking, Project BOB was taking forever to connect, and I knew I had to act fast before it escalated to a full shutdown. If your system is in the same boat, we are going to run through a few steps that I normally follow when storage is running hot: check disk usage for libraries, files, spool files and the IFS.
(1) Check for large objects in all User libraries
The quick and easy technique is to look at library sizes, but that is not really accurate, so I prefer to find large objects on an IBM i system using SQL. You can use the rather spiffy QSYS2.OBJECT_STATISTICS table function, which is detailed in this blog:
SELECT OBJNAME AS OBJECT,
       OBJLONGSCHEMA AS LIBRARY,
       OBJATTRIBUTE AS ATTRIBUTE,
       OBJSIZE AS SIZE,
       LAST_USED_TIMESTAMP AS LAST_DATE
  FROM TABLE(QSYS2.OBJECT_STATISTICS('*ALLUSR', 'ALL'))
 ORDER BY OBJSIZE DESC
 FETCH FIRST 20 ROWS ONLY;
Scanning the SQL results, I saw several objects in QPFRDATA ballooned to gigabytes, packed with *MGTCOL objects from old collections. I had been running detailed metrics for some tuning experiments but forgot to set retention policies. Lesson learned the hard way.
I needed to purge without losing critical stuff, so I should have backed up a couple of recent collections first using SAVPFRCOL (you should too if you’re analyzing with Performance Tools or Navigator), but time was tight and I was facing a critical event.
“Screw those performance collections” I thought and then “must kill them, get it back under control before the engines blow and the di-lithium crystals explode”
How I Deleted the Performance Data and Reclaimed Space
I have *ALLOBJ, but if you don’t, get ownership or authority granted and obviously check with those in power before doing any of these steps.
IMPORTANT: Don’t delete performance data while a collector is running
If Collection Services is active, IBM i will just recreate the objects immediately.
So, the first thing we want to do is stop any active collectors:
ENDPFRCOL
If you’re using Navigator for i or Performance Data Investigator, make sure no collections are running. I didn’t do this important step. Blame it on “panic mode activated”: I dived straight into attempting to clear the performance data library with:
CLRLIB QPFRDATA
What the clear library command does:
- Deletes all objects in QPFRDATA
- Does not delete the library itself
- Safe to run as long as no performance collectors are currently active
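Before clearing, it’s worth double-checking that the collector really is down. A hedged sketch using the QSYS2.ACTIVE_JOB_INFO service (assumption: the Collection Services collector job is normally named QYPSPFRCOL and runs in QSYSWRK; verify the job name on your release):

```sql
-- Sketch: list any active Collection Services collector jobs.
-- QYPSPFRCOL is the usual collector job name; confirm on your system.
SELECT JOB_NAME, JOB_STATUS, SUBSYSTEM
  FROM TABLE(QSYS2.ACTIVE_JOB_INFO(JOB_NAME_FILTER => 'QYPSPFRCOL'));
```

If that returns no rows, CLRLIB should get the library without a fight.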
If you try to clear while collectors are running, you will see a “library in use” warning.

After realizing the error of my ways, I forced the collectors down because this was critical:
ENDPFRCOL FRCCOLEND(*YES)
I gave it a few seconds, then tried the clear library command again:
CLRLIB LIB(QPFRDATA)
Now I can see storage is down under 90%, so I can breathe a little easier, but it’s time for some follow-up cleanup:
(2) Nuke Legacy Libraries
My system had backup libraries of old code I have been tinkering with. After verifying with WRKLIBPDM that it was safe to remove, DLTLIB LIB(OLDSTUFF) freed up more space.
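Before dropping a library, you can confirm how much space it is actually eating. A hedged sketch using the QSYS2.LIBRARY_INFO service (OLDSTUFF is just my example library name; LIBRARY_SIZE is reported in bytes):

```sql
-- Sketch: check a candidate library's size before DLTLIB.
-- 'OLDSTUFF' is a placeholder; substitute your own library name.
SELECT OBJECT_COUNT, LIBRARY_SIZE
  FROM TABLE(QSYS2.LIBRARY_INFO('OLDSTUFF'));
```

If the number is tiny, deleting it won’t move the needle and you can skip ahead.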
(3) Purge old Spool Files
You can quickly reclaim tons of space by deleting old spool files, or even take the nuclear option and delete them ALL. Use with caution, and on your own head be it.
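As a sketch of what that looks like in CL (assumptions: DLTSPLF FILE(*SELECT) with SELECT(*CURRENT) is the nuclear option for the current user’s spool files, and CLROUTQ empties a single output queue; QEZJOBLOG is just an example queue, so test on a sandbox first):

```cl
/* Nuclear option: delete ALL spooled files for the current user */
DLTSPLF FILE(*SELECT) SELECT(*CURRENT)

/* Or clear one output queue outright (QEZJOBLOG is just an example) */
CLROUTQ OUTQ(QUSRSYS/QEZJOBLOG)
```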
(4) Purge Old Audit Journals
On my system, clearing the audit journals is a quick and easy space saver. Obviously you should only delete receivers that have been backed up.
You can delete them manually:
DLTJRNRCV JRNRCV(QSYS/QAUDJRNnnn)
DLTOPT(*IGNINQMSG)
Or delete a whole range using WRKJRNRCV option 4. I recommend automated cleanup using system retention. IBM i has built‑in retention controls:
CHGJRN JRN(QSYS/QAUDJRN)
MNGRCV(*SYSTEM)
DLTRCV(*YES)
This tells the system:
- Automatically delete receivers
- MNGRCV(*SYSTEM): The system manages the changing of journal receivers (this function is called system change-journal management). When an attached journal receiver reaches its size threshold, the system detaches the attached journal receiver and creates and attaches a new journal receiver
- DLTRCV(*YES): Keep storage under control. This specifies that the system deletes journal receivers when they are no longer needed, after they have been detached by system change-journal management or by a user-issued CHGJRN command.
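To see how much space those detached QAUDJRN receivers are actually holding before deleting anything, the same OBJECT_STATISTICS service from earlier works here too (a sketch; it assumes your audit receivers follow the default QAUDJRN-prefixed naming, and note that scanning all of QSYS can take a while):

```sql
-- Sketch: size up audit journal receivers in QSYS before deleting.
SELECT OBJNAME, OBJSIZE, OBJCREATED
  FROM TABLE(QSYS2.OBJECT_STATISTICS('QSYS', 'JRNRCV'))
 WHERE OBJNAME LIKE 'QAUDJRN%'
 ORDER BY OBJSIZE DESC;
```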
(5) Ruthlessly Slaughter Old IFS Folders
I’ve written blogs previously about finding and deleting old bloated IFS folders – this is frequently a problem on my system. Do some research on your IBM i system and zap those fatties!
Find the largest IFS objects:
SELECT PATH_NAME,
       DATA_SIZE
  FROM TABLE(QSYS2.IFS_OBJECT_STATISTICS('/home', 'YES', '*ALLSTMF'))
 WHERE DATA_SIZE > 1024
 ORDER BY DATA_SIZE DESC
 FETCH FIRST 50 ROWS ONLY;
I deleted a bunch of DMP files and my VSCODE source /home/nicklitten/builds folders. These were very large after years of multiple development attempts. These are transient and contain all the source code from my many, and varied, compile attempts over the last 12 months. Captain Fat Finger killed the entire folder and subfolders.
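For the record, zapping a whole IFS folder and its subfolders from a command line is a one-liner. A hedged sketch (the path below is mine; RMVDIR with SUBTREE(*ALL) deletes everything underneath with no confirmation and no undo, so triple-check the path before pressing Enter):

```cl
/* Delete an IFS directory tree, contents and all - no undo! */
RMVDIR DIR('/home/nicklitten/builds') SUBTREE(*ALL)
```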
Crisis Averted!
And…. YES… WRKSYSSTS showed ASP drop to 66%, and the system breathed easy. No more messages. My storage had come crunching back down to normal levels for my small IBM i Power System.
What next?
I’m left with a bad taste in my mouth, and would really like to automate this process.
- Now it’s time to write a CL program that not only cleans up QPFRDATA but also emails me a cleanup report?
- Schedule to run weekly with WRKJOBSCDE?
- Written in Control Language, the objective is clean, modular code which ends collectors, clears data, restarts services, and emails an executive summary. Maybe we can add an SQL object‑size analysis and then email a tidy report?
It’s the kind of thing you subscribers love: practical, modern, and a little cheeky. Sounds like fun.
Going to do this tomorrow morning, barring any new emergencies, and then it will be published in the Programming Course under the SYSTEM UTILITIES Module.
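As a first pass, that weekly cleanup could look something like this (a sketch, not the finished course version: CLEANPFR and MYLIB are made-up names, the email address is a placeholder, and the blanket MONMSG handling is deliberately crude):

```cl
             PGM
             /* Stop Collection Services so CLRLIB can get the library */
             ENDPFRCOL  FRCCOLEND(*YES)
             MONMSG     MSGID(CPF0000) /* tolerate 'already ended' */
             DLYJOB     DLY(10)

             /* Clear the performance data library */
             CLRLIB     LIB(QPFRDATA)
             MONMSG     MSGID(CPF0000)

             /* Restart Collection Services */
             STRPFRCOL
             MONMSG     MSGID(CPF0000)

             /* Email a (very) executive summary */
             SNDSMTPEMM RCP(('nick@example.com')) +
                          SUBJECT('QPFRDATA cleanup completed') +
                          NOTE('Collectors bounced and QPFRDATA cleared.')
             ENDPGM
```

Hung on the scheduler with something like ADDJOBSCDE JOB(CLEANPFR) CMD(CALL PGM(MYLIB/CLEANPFR)) FRQ(*WEEKLY) SCDDAY(*SUN) SCDTIME(0600), and the emergencies take care of themselves. In theory.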