Regardless of the moral arguments you put forward, which I will not
comment on, it seems like this idea would never be implemented because
none of the Tor developers have a desire to implement such a dangerous
feature.

I can argue that the lack of it is also dangerous, actually. It amounts to a form of "pick your poison".

Consider exit policies. Would Tor be better off if all relays were also required to exit all traffic? I think the answer is obviously no: according to the lists from torstatus.blutmagie.de there are currently ~5800 relays but only ~1000 exits, so most Tor relay operators choose not to exit. If they didn't have that choice, there'd almost certainly be far fewer relays. Allowing relays to contribute as much as they (or their ISPs) feel comfortable with helps the project a lot.
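For reference, that per-operator flexibility is just ordinary torrc configuration. The ExitPolicy directive is real; the particular port choices below are only illustrative:

```
## A non-exit relay: carry guard/middle traffic but never be the last hop.
ExitPolicy reject *:*

## Alternatively, exit only on ports the operator is comfortable with,
## e.g. web traffic only (an illustrative choice):
#ExitPolicy accept *:80
#ExitPolicy accept *:443
#ExitPolicy reject *:*
```

Each operator picks their own policy, and the network aggregates whatever capacity people are willing to offer.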

Tor is not a large network. It's a niche product that routinely sacrifices usability for better anonymity, and as a result it is politically vulnerable. I don't want Tor to be vulnerable: I think it's a useful piece of infrastructure that will be critical for improving the surveillance situation. Regardless, "anonymity loves company" and Tor has little. By demanding that everyone who takes part support all uses of Tor simultaneously, including the obviously bad ones, you ensure some people will decide not to take part at all, reducing the company you have and thus making it easier for politicians, regulators and others to target the network.

The above argument is general: it would also apply to giving end users different tradeoffs in the TBB, for example a mode designed for pseudonymity rather than anonymity that doesn't clear cookies at the end of the session. That would be more convenient for users who don't mind if the services they use can correlate data across their chosen username; they just want a hidden IP address. The same logic applies: the more people use Tor, the safer it is.

It may appear that because Tor has been around for some years without encountering any real political resistance, it will always be like this. Unfortunately I don't think that's a safe assumption, at least not any more. Strong end-to-end crypto apps that actually achieve viral growth and large social impact are vanishingly rare. Skype was one example, until it was forced to undo that by introducing back doors. The Silk Road was another. The combination of Bitcoin and Tor is very powerful, and we see this not only with black markets but also with Cryptolocker, which appears to be the "perfect crime" (there are no obvious fixes). So times have changed, and the risk of Tor coming to the attention of TPTB is much higher now.

The best fixes for this are:
  1. Allow people to explicitly take action against abuse of their own nodes, so they have a plausible answer when being visited individually.

  2. Grow usage and size as much/as fast as possible, to maximise democratic immunity. Uber is a case study of this strategy right now.
The absence of (1) means it'll be much more tempting for governments to decide that all Tor users should be treated as a group.
 
Further, why do you think such infrastructure would be remotely
successful in stopping botnets from using the Tor network? A botnet
could just generate a thousand hidden service keys and cycle through
them.

That's a technique that's been used with regular DNS, and beaten, before: domain generation algorithms (DGAs). The bot gets reverse engineered to find the iteration function, and the generated domain names/keys eventually get sinkholed. There are countermeasures and counter-countermeasures, as always.
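To make that dynamic concrete, here's a toy sketch; the seed, function, and name format are all invented for illustration and not taken from any real bot:

```python
import hashlib

def generate_addresses(seed: bytes, count: int) -> list[str]:
    """Toy domain-generation algorithm: a simple hash chain.

    Determinism is the whole point: the bot embeds this function plus
    a seed, and anyone who reverse engineers the bot can run the same
    function and enumerate every future address in advance."""
    state = seed
    names = []
    for _ in range(count):
        state = hashlib.sha256(state).digest()
        # Fake ".onion"-style label derived from the chain state.
        names.append(state.hex()[:16] + ".onion")
    return names

# Bot and defender compute identical lists from the extracted seed,
# so the defender can sinkhole or blocklist the names ahead of time.
bot_side = generate_addresses(b"hardcoded-seed", 1000)
defender_side = generate_addresses(b"hardcoded-seed", 1000)
assert bot_side == defender_side
```

Real onion addresses are derived from keypairs rather than arbitrary labels, so an actual bot would iterate key generation instead, but the cat-and-mouse structure is the same.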

But yes, some types of abusers are harder to deal with than others, that's for sure. If it helps, s/botnet/ransomware/. The same arguments apply. I don't want to dwell on just botnet controllers.

With respect to your specific counter-arguments:
 
So, this would be:

      * Socially damaging, because it would fly in the face of Tor's
        anti-censorship messaging

That seems like a risky argument to me: it's too easy for someone to flip it around by pointing out all the extremely nasty and socially damaging services that Tor currently protects. If you're going to talk about social damage, you need an answer for why HS policies would be more damaging than those things.

Also, the Tor home page doesn't prominently mention anti-censorship anywhere; it talks about preserving privacy. If you wanted to build a system that's primarily about resisting censorship of data, it would look more like Freenet than hidden services (which can, in a sense, be censored via DoS attacks and the like).
 
      * Technically damaging, because it would enable the worst class of
        attacks by allowing attackers to pick arbitrary introduction
        points

Who are the attackers in this case, and how do they force a selection of introduction points? Let's say Snowden sets up a blog as a hidden service. It appears in nobody's policies, because everyone agrees that this is a website worth hiding.

If the attacker is the NSA, what do they do next?
 
      * Not even technically helpful against other content, because they
        can change addresses faster than volunteers maintaining lists of
        all the CP onionsites can do the detective work (which you
        assume people will want to do, and do rapidly enough that this
        will be useful)

I didn't assume that, actually. I assumed that being able to set policies over the use of their own bandwidth would encourage people to contribute more, which seems a safe assumption. You don't need perfection to achieve that outcome.

But regardless, changing an onion address is no different to changing a website address. It's not sufficient just to change it. Your visitors have to know what the new address is. You're an intelligent guy so I'm sure you see why this matters.