Systemd failed units
Linux Version Checking
RHEL / CENTOS
kswapd0 is Eating My CPU
When kswapd0 is taking 100% of the CPU and/or you're seeing errors like the following, you're running into a problem that was most likely fixed in a later kernel version.
The gist of what is going on is that you're running out of kernel memory and the kernel cannot allocate more pages for itself. This is a dead giveaway that your vm.min_free_kbytes value is most likely too low. I've seen this happen most often on boxes that have long uptimes and run services in kernel space, e.g. NFS backends.
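To see what the current reserve is on a box showing these symptoms, you can read the sysctl directly:

```shell
# Current kernel free-memory reserve; the value is in kilobytes
cat /proc/sys/vm/min_free_kbytes
```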
If you look at the error you'll see order:0; that means the kernel could not allocate even a single page. The order is the power-of-two size of the request: order:1 would mean a 2-page (2^1) allocation failed, order:4 a 16-page (2^4) allocation.
The mode parameter is a bit field that specifies the type of memory allocation that was requested. The value 0x20 indicates that the allocation was made from the kernel’s slab cache.
Using a tool like vmstat you can see what the memory limits are and where it's running out of memory. In this case it's most likely expired slabs taking up space that the kernel can't reclaim.
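Besides vmstat, a quick way to see how much memory is tied up in slabs, and how much of it the kernel considers reclaimable, is /proc/meminfo:

```shell
# Total slab usage, split into reclaimable and unreclaimable portions
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```

slabtop gives a per-cache breakdown if you want to see which caches are the offenders.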
To fix this problem, upgrading the kernel is normally the best option, as this was a known bug in some kernel versions. If that is not possible for whatever reason, run the following commands: they will expand the kernel's reserved memory to 1GB, allow the kernel to reclaim memory, and drop the caches. This should stop kswapd from eating the CPU and give the kernel access to more memory.
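The commands themselves are not shown above; a sketch of what they would be (vm.min_free_kbytes is in kilobytes, so 1048576 is 1GB; all three need root):

```shell
# Reserve 1GB for the kernel (value is in kilobytes)
sudo sysctl -w vm.min_free_kbytes=1048576
# Let the kernel reclaim memory from its zones
sudo sysctl -w vm.zone_reclaim_mode=1
# Drop the page cache
echo 1 | sudo tee /proc/sys/vm/drop_caches
```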
You may need to change the 1 to 2 (or 3) on drop_caches to make it more aggressive in freeing up memory. In this case a value of 3 would make sense since the failed request came from the slab cache.
See Documentation/sysctl/vm.txt in the kernel source for more information.
Normally we want vm.zone_reclaim_mode set to 0 (the default) for file servers, like NFS, because caching is more important for them. However, in this case I set it to 1 so that I can make sure the kernel is getting enough memory. You'll need to change this on a case-by-case basis depending on your systems and data. You should consider changing this back to 0 once the system is stable so that services like NFS can take advantage of caching.
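Note that sysctl -w only changes the running kernel; to keep the larger reserve across reboots, persist it in a sysctl.d fragment (the file name here is just an example):

```shell
# Persist the setting; the 90-kswapd.conf name is arbitrary
echo 'vm.min_free_kbytes = 1048576' | sudo tee /etc/sysctl.d/90-kswapd.conf
sudo sysctl -p /etc/sysctl.d/90-kswapd.conf
```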
DNF / YUM Repo Mirroring, Syncing, Creation
Mirroring, Syncing, and Creating repos
sudo yum install yum-utils createrepo
sudo dnf install dnf-utils createrepo
To do the initial sync
This will sync whichever repoid you specify (e.g. epel), download only the x86_64 packages, allow it to use the yum/dnf plugins, and sync everything to your mirror directory. Omitting the repoid flag will sync ALL the repos configured on the system (/etc/yum.repos.d/).
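Assuming the yum-utils/dnf-utils reposync flags, an initial sync of just EPEL for x86_64 into an example mirror path might look like:

```shell
# -l enables yum/dnf plugins, -r limits to one repoid, -a limits the arch,
# -p is the download path (/srv/mirror/ is an example)
reposync -l -r epel -a x86_64 -p /srv/mirror/
```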
Once the mirror is created you can sync it, verify it, and delete the old packages
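For the recurring sync, reposync's -g (drop packages that fail GPG checking) and -d (delete local packages no longer upstream) flags cover the verify and cleanup steps; the repoid and path are the same example values as above:

```shell
# Re-sync, GPG-verify, and delete packages that were removed upstream
reposync -l -g -d -r epel -a x86_64 -p /srv/mirror/
```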
If you need to do a quick sync you can add --newest-only and it will just sync the newest packages. Adding the --download-metadata flag might be needed for some repos, for example if you want to use the repo directly without running createrepo on it.
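Putting those flags together, a quick sync that also pulls the upstream repodata might look like (again assuming the yum-utils reposync flags and example paths):

```shell
# -n/--newest-only grabs only the latest package versions;
# --download-metadata pulls repodata so createrepo isn't needed
reposync -l -n --download-metadata -r epel -a x86_64 -p /srv/mirror/
```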
If you have a collection of RPMs that need to be put together in a repo, need to update a repo after running reposync, or need to fix a local mirror with bad metadata, use the createrepo command.
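A minimal invocation just points createrepo at the directory of RPMs; for re-runs over an existing repo, --update avoids re-checksumming unchanged packages (the path is an example):

```shell
# Build repodata for a directory of RPMs
createrepo /srv/mirror/epel/
# After a reposync, only re-process packages that changed
createrepo --update /srv/mirror/epel/
```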
If you have a large repo you will want to setup a cache. This will improve your entire repo creation speed at the cost of some disk space.
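createrepo's -c/--cachedir flag is what enables the checksum cache; the cache path here is an example:

```shell
# Cache package checksums so subsequent runs skip unchanged RPMs
createrepo --update --cachedir /srv/mirror/.cache /srv/mirror/epel/
```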
If you're having problems you can use the --workers flag to set the number of workers; by default it will use as many workers as you have threads. Another way of speeding up the process is to remove the --deltas flag to prevent it from generating deltas. It may also be prudent to set nice and ionice levels depending on your system. For non-SSD and/or SAN storage, I typically recommend limiting workers to no more than half of the available threads, a nice level of 15, and an ionice class of 3. This will make sure the system has plenty of resources to handle large repos without impacting other services. For SSD-backed storage, especially local SSDs, ionice typically is not needed unless your storage throughput is limited by something like LUKS.
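Combining that advice for spinning-disk or SAN storage, a throttled run on an 8-thread box might look like (worker count, nice, and ionice values follow the recommendations above; adjust for your hardware):

```shell
# Half of 8 threads, low CPU priority (nice 15), idle I/O class (ionice -c 3)
nice -n 15 ionice -c 3 createrepo --update --workers 4 /srv/mirror/epel/
```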
Hardening the boxes