diff --git a/blog/posts/01-22-2025 - Windows Power Profiles Causing Notable CPU Performance Loss.md b/blog/posts/01-22-2025 - Windows Power Profiles Causing Notable CPU Performance Loss.md
index b6bfb40..18bf491 100644
--- a/blog/posts/01-22-2025 - Windows Power Profiles Causing Notable CPU Performance Loss.md
+++ b/blog/posts/01-22-2025 - Windows Power Profiles Causing Notable CPU Performance Loss.md
@@ -23,7 +23,7 @@ The general idea is that Windows devices (Workstations & Servers) have what are
 When I learned of the above, I began to audit every Windows-based server and workstation (Physical and Virtual) in my homelab. The virtual machines seemed unaffected by this issue, but I still configured them to "**High Performance**" power profiles regardless. However, every single physical host (`VIRT-NODE-01`, `VIRT-NODE-02`, and `LAB-DRAAS-01`), all saw notable performance improvements ranging from 32% to 41%, on average going from 1.75GHz to 2.6GHz on the virtualization hosts, and 1.9GHz to 3.2GHz on the backup server.
 
 ## Final Thoughts
-I am so upset that for years, no, decades, it never occured to me that the power profiles applied to server operating systems. I always just assumed they ran in "**High Performance**" power profiles all the time. I discovered I had non-trivial amounts of performance loss because of this simple checkbox setting in the OS.
+I am so upset that for so many years, it never occurred to me that the power profiles applied to server operating systems. I always just *assumed* they ran in "**High Performance**" power profiles all the time. I discovered I had non-trivial amounts of performance loss because of this simple checkbox setting in the OS.
 
 !!! success "Performance Improvements"
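
The audit described in the hunk above boils down to checking each host's active Windows power plan and switching it to the built-in High Performance scheme. Below is a minimal sketch of scripting that check with Python around the stock `powercfg` utility; the GUID used is the default High Performance plan that ships with Windows, so custom or OEM plans would need a different value.

```python
# Minimal sketch (not from the original post): query the active Windows power
# plan via powercfg and switch it to the built-in High Performance plan if a
# different plan is active. May require an elevated (administrator) prompt.
import subprocess

# GUID of the stock "High Performance" plan on default Windows installs.
# Custom or OEM plans use different GUIDs; list them with `powercfg /list`.
HIGH_PERFORMANCE_GUID = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"


def active_scheme() -> str:
    """Return the GUID of the currently active power scheme."""
    out = subprocess.run(
        ["powercfg", "/getactivescheme"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Output looks like: "Power Scheme GUID: <guid>  (<name>)"
    return out.split("GUID:")[1].split()[0]


def ensure_high_performance() -> None:
    """Switch the host to High Performance if another plan is active."""
    current = active_scheme()
    if current.lower() != HIGH_PERFORMANCE_GUID:
        subprocess.run(
            ["powercfg", "/setactive", HIGH_PERFORMANCE_GUID], check=True
        )
        print(f"Switched power plan: {current} -> {HIGH_PERFORMANCE_GUID}")
    else:
        print("High Performance plan already active.")


if __name__ == "__main__":
    ensure_high_performance()
```

Running something like this against each host, or pushing the equivalent `powercfg /setactive` call through existing management tooling, would make the same fix repeatable across a fleet instead of a one-off checkbox change.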