Last week’s note covered four SEO metrics that most people are reading at the wrong level. This week is the companion piece: four concepts that people spend real time and energy optimizing for but probably shouldn’t.
The difference between the two notes is this. Last week’s metrics have legitimate uses when applied correctly. This week’s concepts are either outdated, misunderstood at a fundamental level, or solving a problem that doesn’t exist anymore.
Keyword Density
People still ask this question. “What percentage should my keyword appear in the content?” Five percent? Three percent? Two?
There is no target percentage. There hasn’t been one for a very long time.
Google stopped relying on literal keyword matching years ago. It understands entities, relationships between entities, synonyms, and meaning. When you search “how to fix a leaking faucet,” Google doesn’t count how many times each ranking page says “leaking faucet.” It understands that the page is about plumbing repair, that a faucet is a fixture, that a leak is a malfunction, and that the searcher wants step-by-step instructions. It matches intent and topical coverage, not word frequency.
Yet SEO plugins still flag keyword density. Beginners see a warning that their keyword only appears 1.2% of the time and start cramming it into sentences where it doesn’t belong. The result is content that reads awkwardly, and awkward content doesn’t help anyone.
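For perspective, the number those plugins report is nothing more sophisticated than a word-frequency ratio. A rough sketch of the calculation in Python (not any particular plugin’s actual implementation) looks something like this:

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Rough sketch of what an SEO plugin calls 'keyword density':
    occurrences of the phrase divided by total word count, as a percentage."""
    words = re.findall(r"\w+", text.lower())
    occurrences = text.lower().count(keyword.lower())
    return 100 * occurrences / max(len(words), 1)

sample = "How to fix a leaking faucet: shut off the water, then replace the washer."
print(f"{keyword_density(sample, 'leaking faucet'):.1f}%")  # ~7% on this toy sample
```

That’s the entire metric. It can’t tell the difference between a page that covers plumbing repair thoroughly and one that repeats “leaking faucet” a dozen times, which is exactly why Google doesn’t use it.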
If you’re writing naturally about a topic and covering the relevant entities with appropriate depth, your target keyword will appear at a natural frequency. You don’t need to count it. If you’re worried about whether Google understands what your page is about, the answer is almost never “use the keyword more times.” It’s “cover the topic more thoroughly.” Those are very different things.
PageSpeed Insights Score
People chase a perfect 100 in Google’s PageSpeed Insights tool like it’s a grade. It’s not.
The Lighthouse score you see in PageSpeed Insights is a lab-based diagnostic. The tool runs a simulated test of your page under controlled conditions and produces a score. That score is useful for identifying specific performance issues: images that aren’t compressed, JavaScript that blocks rendering, layout shifts during load. It’s a debugging aid, not a measurement of real-world experience.
What Google actually uses as a ranking signal (and a very weak one at that) is Core Web Vitals field data. That’s the real-world performance data collected from actual users visiting your site through Chrome. It measures three things: how fast the largest visible element loads (LCP), how quickly the page responds to interaction (INP), and how much the layout shifts unexpectedly during load (CLS). These are measured from real user sessions, not from a simulated lab test.
A site can score 65 in Lighthouse but have perfectly good Core Web Vitals because real users on real connections experience the site just fine. A site can score 98 in Lighthouse but have poor field data because the lab simulation doesn’t reflect how the site actually performs for its audience.
The Lighthouse score and Core Web Vitals field data are related but not the same thing. If you’re going to track page speed as part of your SEO work, look at the Core Web Vitals report in Google Search Console or the field data section in PageSpeed Insights (labeled “Discover what your real users are experiencing”). That’s what Google uses. The number at the top of the screen is for diagnosing problems, not for measuring ranking impact.
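If you want to see the lab/field split for yourself, the public PageSpeed Insights API returns both in a single response: the Lighthouse result and the field data collected from Chrome users. Here’s a minimal sketch in Python, assuming the documented v5 response shape; the target URL is a placeholder, and the exact metric key names should be verified against the current API reference:

```python
import json
import urllib.parse
import urllib.request

# Query the public PageSpeed Insights API (v5). Anonymous requests work for
# light use, but Google rate-limits them.
url = "https://example.com/"  # placeholder page to test
endpoint = (
    "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?"
    + urllib.parse.urlencode({"url": url, "strategy": "mobile"})
)
with urllib.request.urlopen(endpoint) as resp:
    data = json.load(resp)

# Lab score: the simulated Lighthouse run (the big number at the top of the UI).
lab_score = data["lighthouseResult"]["categories"]["performance"]["score"] * 100

# Field data: Core Web Vitals measured from real Chrome users.
# Key names below are assumed from the documented response format.
field = data.get("loadingExperience", {}).get("metrics", {})
for key in ("LARGEST_CONTENTFUL_PAINT_MS",
            "INTERACTION_TO_NEXT_PAINT",
            "CUMULATIVE_LAYOUT_SHIFT_SCORE"):
    metric = field.get(key)
    if metric:
        print(key, metric["percentile"], metric["category"])

print("Lighthouse lab score:", round(lab_score))
```

If the field data section comes back empty, the site doesn’t have enough Chrome traffic for Google to report it, and the lab score matters even less as a proxy for anything Google uses.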
Word Count
The idea that longer content ranks better refuses to die. It comes from correlation studies that found pages ranking in the top positions tended to have more words. The conclusion people drew was that writing longer pages would improve rankings.
The problem is that correlation isn’t causation, and the actual cause is straightforward. Longer pages tend to rank better because they tend to cover more entities, answer more questions, address more aspects of the search intent, and provide more information gain. Those things help with rankings. The word count itself does nothing. Google has said this explicitly. There is no minimum word count for ranking, and adding words doesn’t help unless those words add substance.
A 3,000-word page that pads its length with filler, restated points, and generic advice performs worse than a 1,200-word page that covers the topic with depth and specificity. If you read the information gain note, this should click. Google can measure whether a page contributes novel information relative to other pages on the same topic. More words is not more information gain. More novel, specific information is more information gain, regardless of how many words it takes to deliver it.
The practical consequence of chasing word count is that people dilute their content. They add paragraphs restating things they already said. They include sections on tangentially related topics just to hit a number. Every paragraph that restates what the other ranking pages say, or what your own page already said, dilutes the ratio of useful-to-redundant content. That’s the opposite of what you want.
Write until you’ve covered the topic thoroughly. Stop when you’ve said what needs to be said. If that’s 800 words, publish 800 words. If it’s 2,500 words, publish 2,500 words. The number is an outcome of thorough coverage, not a target to aim for.
“Toxic” Links and Disavow Obsession
Third-party SEO tools have a feature that scans your backlink profile and flags links as “toxic.” The flags show up in red. There’s usually a score. It feels urgent. People spend hours compiling disavow files to submit to Google, rejecting links from sites they’ve never heard of.
Most of the time, this is wasted effort.
Google has said repeatedly that its algorithms are very good at identifying and ignoring low-quality links on their own. John Mueller has addressed this directly more than once. The system doesn’t need your help to figure out that a spammy comment link on a random blog isn’t a genuine endorsement of your site. Google just ignores it.
The disavow tool exists for specific situations. If you’ve received a manual penalty related to unnatural links, you may need to disavow the links that caused it. If you previously participated in a paid link scheme and want to clean it up, the disavow tool is appropriate. These are deliberate, known problems where you’re telling Google “I know about these specific links and I want you to ignore them.”
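For those narrow cases, the file itself is nothing exotic: a plain text file with one URL or domain: entry per line and # for comments, uploaded through the disavow tool in Search Console. Something like this, with placeholder domains:

```
# Links from a paid link scheme we participated in during 2019
domain:spammy-directory.example
domain:paid-links.example

# A single page rather than a whole site
https://random-blog.example/widgets/comment-page-12/
```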
What the disavow tool is not for is going through every link a third-party tool paints red and rejecting it preemptively. Those tools use their own proprietary scoring to determine what’s “toxic.” Their criteria don’t necessarily match what Google considers problematic. A link from a low-DA site with a foreign-language domain might look suspicious to a tool’s algorithm but be a perfectly legitimate link from a real site in another country. Disavowing it doesn’t help you. In some cases, people accidentally disavow links that were actually passing value to their site.
The risk isn’t just wasted time. It’s the possibility of removing links that were helping. If you haven’t received a manual penalty and you aren’t cleaning up a link scheme you knowingly participated in, you almost certainly don’t need to touch the disavow tool. Let Google’s algorithms handle the noise. They’ve been doing it for years.
If anything, treat a “toxic” flag as a prompt to take a closer look at that particular link. Nothing more.
The Pattern Across Both Notes
Last week’s note and this one share the same underlying problem. People anchor to a number or a concept because it feels concrete and measurable, and they optimize for it without asking whether it actually connects to how search engines work.
The fix is always the same question. Does this thing I’m spending time on directly influence how Google evaluates my site? If the answer is no, or if the answer is “only in a very specific context that doesn’t apply to what I’m doing,” that time is better spent elsewhere.
There’s no shortage of things in SEO that actually matter and actually respond to effort. Spend your time there.


