I think of myself as a full-contact English user. This means that sometimes English gets the better of me and I say something that makes no damn sense, requires that you be me to understand it, or involves using a word in a distinctly nonstandard way. The latter is the one I'm wrestling with at the moment: I'm glumly coming to accept that "hive mind" is one of those constructions where if I use it without a qualifier, verbal footnote, or something clarificatory, people will take a different meaning from it than what I intend. So here's my understanding of hive minds.
Here's where the semantic shear starts: when I say "hive mind," a lot of people will think "oh, like ants or bees; there's a queen, someone in charge, at the center, and that's where all the decisions come from, and most of the other members of the hive mind are drones with no initiative who just do what they're told." That's a problematic response. For one thing, ants and bees don't actually work that way. For another, there are three kinds of hive mind you could be talking about when you use the phrase: hierarchical, rules-based, and distributed. The response I just sketched describes a hierarchical hive mind.
Hierarchical hive minds are, basically, the Borg. Frankly, they're monstrously impractical. They scale up poorly, they have a big single-point-of-failure problem, and they use resources wastefully. I'm bashing them because those three problems are exactly the sort of thing you'd want to use a hive mind setup to solve. A hive mind setup where those are still the main problems is fairly unappealing. That said, we're embedded in a hierarchical society, and the Borg and other hierarchical hive minds reflect prevailing ideas about how well hierarchy works. As someone with an enduring suspicion of hierarchy, I suppose I'm glad that hierarchical hive minds are usually villains, but I'm also annoyed and surly that they get portrayed as the only way to do such a thing. They seem to have this function of reflecting dystopias of extreme inequality, where only a few people's thoughts actually matter and everyone else is a sort of walking husk. The relevance to 20th- and 21st-century America is incontestable. Hierarchical hive minds are about control, just like regular hierarchies. I doubt you've seen one of either that would put up with you just saying "nah, this isn't working for me" and walking away.
Meanwhile, for the actual situation with ants and bees, a great place to start is Deborah Gordon's TED talk (her talk is clearly intended as a 90-minute class lecture instead of a 20-minute talk, though, and I would love to hear it in that form). One thing jumps out of that presentation: ants are absolutely non-hierarchical. This should be obvious, in one sense. Ants, bees, and termites are very short on resources with which to manage a hierarchy. The biological substrate that they're working with doesn't have enough processing power to think in the way that we humans do about the projects that they undertake. So their hive minds are built using very, very simple parts, but still achieve impressive and complex results. That's why they're one of the canonical examples of emergent properties and swarm intelligence, ideas that say that you can get very complex results out of very simple parts. Exactly what results come from what rules and what substrates gets you into cellular automata and similar fields. The point that I want to take from this, though, is that the thinking we do isn't the only way to achieve complex results, and for some situations it's not the best way to achieve complex results.
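(If you want to see the simple-parts-to-complex-results thing in miniature without bothering any actual ants, here's a little Python sketch of an elementary cellular automaton, Rule 110, one of the stock examples from that field. Nothing about it is ant-specific; it's just a demonstration that a rule a few lines long, applied purely locally, produces globally complicated structure.)

    # Elementary cellular automaton, Rule 110: each cell looks only at itself
    # and its two neighbors, yet the global pattern is famously complex.
    RULE = 110

    def step(cells):
        """Apply the rule once; each new cell depends only on a 3-cell neighborhood."""
        n = len(cells)
        new = []
        for i in range(n):
            left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            pattern = (left << 2) | (mid << 1) | right   # a number from 0 to 7
            new.append((RULE >> pattern) & 1)            # look up that bit of the rule
        return new

    # Start from a single live cell and watch structure emerge from almost nothing.
    cells = [0] * 79 + [1]
    for _ in range(40):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)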
This comes back to why I bash hierarchical hive minds: they stem, in large part, from people with archaic ideas about the value and necessity of hierarchy - the Great Chain of Being, for example. Those people would look at an ant colony, at a beehive, at a termite mound, ask themselves "how does that work?" and come up with an answer derived from their model of their own world: hierarchical, with someone all-powerful at the top. And we know now that that's pretty much the opposite of how ants and bees work. But that model of how bees and ants work gets transplanted into speculative fiction, whose business is to ask "what if?" - in this case, "what if humans worked the way ants and bees do?" And one answer to that is the Borg. Well, now we know the problem with that. An ant colony is a way to make sure that ant genes survive, and ant genes have figured out that from a bunch of simple parts, you can make a machine that very efficiently spreads ant genes. An individual ant is a very small resource investment, so you can throw a lot of ants at any given task that the hive has to deal with. Having a queen is an efficient way for them to do reproduction, so they do that; but having a queen who thinks and gives orders and makes plans would be a huge resource investment, so they don't do that.
This is why I say that hierarchical hive minds are fragile, clumsy, and wasteful. Having a Borg Queen figure is fragile because you introduce a single point of failure, and we should all be familiar with the idea that that's a Bad Thing and hurts reliability a lot. Having a figure like that is also clumsy because that means that a ton of processing has to go on in one place, and it scales up poorly, which in turn exacerbates your reliability problem because you'll suffer more and more from bottlenecks and minor issues at that central point. Finally, it's wasteful. The biological substrate of an insect colony is made up of very simple, very cheap parts. The substrate of your average Borg or Borg-clone hive mind is the opposite: it's humans, usually with cybernetic enhancements, who are neither simple nor cheap compared to insects. So it's wasteful because you could be doing a lot more with individual components than just having them obey orders, even if they're obeying smartly (which isn't a consistent feature of fictional hierarchical hive minds anyhow).
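To put rough numbers on the single-point-of-failure complaint, here's a back-of-the-envelope Python sketch. The availability figures are made up by me for illustration; the point is just that a hive which is dead whenever its one queen is down can never be more reliable than the queen herself, while a hive that only needs some majority of its cheap nodes can be more reliable than any individual part.

    from math import comb

    def at_least(k, n, p):
        """Probability that at least k of n independent parts, each up with
        probability p, are working at a given moment."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Hypothetical hierarchical hive: nothing happens unless the queen is up,
    # so her availability caps the whole system no matter how good the drones are.
    queen_up, drone_up = 0.99, 0.95
    hierarchical = queen_up * at_least(6, 10, drone_up)

    # Hypothetical distributed hive: up as long as any 6 of its 10 peers are up.
    distributed = at_least(6, 10, drone_up)

    print(round(hierarchical, 4))  # ~0.9899 - the queen is the ceiling
    print(round(distributed, 4))   # ~0.9999 - no single node is the ceiling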
So if you've got a bunch of enhanced humans and you want a hive-mind type structure, you want a distributed hive mind. You'd recognize one by the way it leverages high-speed communication as much as possible to speed up and take advantage of existing human tendencies. Members would hold a lot of state, but wouldn't hold the whole groupmind. As a node, you'd retain autonomy, but duck into nonautonomous or gestalt mode for tasks that need it. Cellular-automata-style algorithms would tell you when a piece of information you hold needs to be mirrored to other nodes. You'd spend some of your thinking time being a traffic director and curator - dealing with requests from other nodes by answering them, forwarding them, or modifying and then forwarding them. Unlike hierarchical and rules-based hive minds, you can leave a distributed one, and you don't have to be born into one; it's not a lifetime commitment. I see distributed hive minds as a tool, not a lifestyle: you join one because you're interested in getting a specific task done, and when it's done, the groupmind dissolves and the members go find something else to do with their time.
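If you'd like the traffic-director picture in something more concrete than hand-waving, here's a toy Python sketch. It isn't a real protocol and every name in it is made up; it just shows a node that holds a slice of the shared state, answers what it can, forwards what it can't, and uses a dumb local rule to decide what's worth mirroring to its neighbors.

    import random

    class Node:
        """Toy distributed-hive-mind node, purely illustrative. It holds a slice
        of the shared state, never the whole groupmind, and spends part of its
        time routing other nodes' requests."""

        def __init__(self, name, mirror_threshold=2):
            self.name = name
            self.state = {}        # partial local copy of the shared knowledge
            self.neighbors = []    # a handful of peers, not the whole network
            self.seen = {}         # how often each fact has come up locally
            self.mirror_threshold = mirror_threshold

        def learn(self, key, value):
            """Take in a fact; a simple local rule decides whether to mirror it."""
            self.state[key] = value
            self.seen[key] = self.seen.get(key, 0) + 1
            # Cellular-automaton-flavored rule: facts that keep coming up locally
            # are the ones worth pushing out to the neighbors.
            if self.seen[key] >= self.mirror_threshold:
                for peer in self.neighbors:
                    peer.state.setdefault(key, value)

        def handle(self, key, hops_left=3):
            """Traffic-director duty: answer if we know, otherwise forward."""
            if key in self.state:
                return self.state[key]
            if hops_left > 0 and self.neighbors:
                return random.choice(self.neighbors).handle(key, hops_left - 1)
            return None  # the groupmind doesn't know (yet)

    # Leaving is cheap: drop a node from its peers' neighbor lists and the rest
    # of the mesh keeps working. Joining is task-scoped, not a lifetime commitment.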
It should strike you that we're not actually that far from distributed hive minds. The way that high-prestige knowledge workers (for instance, cryptodragon) operate is really pretty similar. The networked, distributed, nonhierarchical approach that I like here is what's emerging as a good way to handle the information-processing tasks that such people have to deal with. But those tasks are embedded in a hierarchical framework, present-day consumer capitalism, that's pretty toxic. So what I'd really like to see is a cheapening of the nonbiological parts that let us participate in groupminds like that, in order to spread around the benefits of being able to think in groups and access expertise and knowledge with minimal burden on the knowledge-giver's part. This will help us ease the problems that over-reliance on hierarchy has created in the last thousand years or so. It's not a panacea, but it could be a tremendously useful tool.
This also shades into one of my pet theories about AI: I personally think that for a lot of applications, AI and UI are going to become the same thing. This, too, is something that we already have a significant amount of progress on. Simulating an entire brain, the hope of strong AI, is hard. Simulating a nonthinking body part and hooking it into an existing nervous system, on the other hand, is something we can already do; prototype prosthetics are already very close to this. So if you were going to build a cybernetic part that adds a new capability to the human body, you'd want to leverage the brain's plasticity: make the part connect to nerves that the brain already knows how to deal with, and train it to send its own distinct impulses down those nerves. Thus, the problem becomes training a human instead of programming an AI. The more sophisticated the capability you're adding, the more processing has to go on in the device itself, though, and that's where I think AI is going to go - into faking nonexistent body parts in a way that convinces the brain that they're usable, and then into faking "body parts" that are actually abstract capabilities such as, just for instance, sharing thoughts with nearby people and developing a consensus on some question without speaking and at high speed. Or air traffic control (forget Trinity and Neo; I wanted to be one of Zion's air traffic control staff). There are a lot of peripherals that you can hook onto a human nervous system once you learn the right tricks, is my point, and I think that's probably a more fruitful short- and medium-term direction for AI than trying to outright simulate whole brains.
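To make that division of labor concrete, here's a deliberately silly Python sketch - every class and method name in it is hypothetical, and there's no real prosthetics API behind any of it. All the processing lives in the peripheral, whose only job is to turn whatever its new capability computes into impulse patterns on channels the nervous system already has; the other half of the problem, learning to read those impulses, is left to the human, which is the whole point.

    class Peripheral:
        """Hypothetical cybernetic add-on; purely illustrative. Whatever the new
        capability computes, the only thing the body ever sees is a pattern of
        impulses on channels the brain already knows how to read."""

        def __init__(self, n_channels=8):
            self.n_channels = n_channels

        def sense(self):
            """The new capability itself - consensus of nearby minds, traffic
            data, whatever. Stubbed here as a single number between 0 and 1."""
            return 0.42

        def encode(self, reading):
            """All the cleverness lives here: map the reading onto the existing
            channels as firing rates. The device does the encoding; learning to
            interpret it is the human's job."""
            return [round(reading * (i + 1) / self.n_channels, 3)
                    for i in range(self.n_channels)]

        def stimulate(self, nerves):
            """One sense -> encode -> fire cycle on whatever nerve interface
            the implant actually has."""
            nerves.fire(self.encode(self.sense()))

    class FakeNerves:
        """Stand-in for the real interface; just shows what would be fired."""
        def fire(self, rates):
            print("impulse rates:", rates)

    Peripheral().stimulate(FakeNerves())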
Turning back to hive minds, this whole shebang is generally what I have in mind when I'm talking about them: distributed minds, egalitarian gestalts, and highly leveraged communications-oriented cybernetics.
Also I think Google will probably do the first good implementation and take everyone by surprise, but that's just vile speculation.