
OK, before I start, I realise this is on the fringe of being on-topic (I have read the Questions help for this site), particularly as this is not a real-world problem. However:

  1. I cannot find anything relevant on Google
  2. From a purist point of view surely it must fall within Computer Science?

In any case, if I have overstepped a boundary then I apologise and welcome the closure; I am an avid user of other SE sites, so I understand the issues.

Caveats aside, here it is: I have long wondered whether it would be possible to build a functioning computing system, using humans as discrete logic components, to solve problems that individual humans could not solve in a practical timescale. For example, imagine a number of humans stranded on an island without any machines, who need to crunch some complex numbers to escape.

I imagine arranging people so that they receive inputs from other groups within the system, make simple decisions (perhaps binary decisions, perhaps not) and pass the outputs to other groups.

Then I imagine some kind of programming language could be developed to control the data and computation flow, so that the language could be used to solve complex problems without any individual understanding the overall problem.
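
To make the idea concrete, here is a toy sketch (Python, purely illustrative; all the names and the half-adder wiring are made up for the example, not a real proposal) of two "people" each making one binary decision and being wired together into a small circuit:

```python
# Toy model: each "person" evaluates one boolean function of their
# inputs and passes the result along. Wiring two such people together
# gives a half-adder that adds a pair of bits.

def person_xor(a: bool, b: bool) -> bool:
    """A person instructed: raise your hand iff exactly one input hand is raised."""
    return a != b

def person_and(a: bool, b: bool) -> bool:
    """A person instructed: raise your hand iff both input hands are raised."""
    return a and b

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Two people acting together as a half-adder: returns (sum_bit, carry_bit)."""
    return person_xor(a, b), person_and(a, b)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            s, c = half_adder(a, b)
            print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```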

So I guess the above is not an answerable question as such, but does anyone know of any research, books, papers, or whatever on what it would take to achieve this, what kinds of problems could be addressed and potentially solved, what kind of control language could be deployed, and how the architecture could be scaled up to handle more complex problems?

I suppose, in essence, I am looking for anything on "idealised" atomic (as in self-contained) and standard computing units that could be arranged at will; I am just thinking in human terms.

I find the idea fascinating and alluring, and I'd love to try it out one day and see what performance could be achieved! Sorry for the tags I have used; as I was searching the tags here, I quickly became aware that I have no idea of the correct terminology for what I am thinking of, though I am sure it exists within the field...

Marv Mills

5 Answers


Actually, until the 1950s the word "computer" was used to refer to a human who did arithmetic calculations. One (or more) of Richard Feynman's (many) autobiographies contains anecdotes about his time on the Manhattan Project, where he ran the group of human computers. To arrange a group of humans to perform a complex computation, you wouldn't start with discrete logic components, but rather have each human perform multiple arithmetic operations and then coordinate their results (along with some error checking). How to organize these kinds of large computations may be covered in numerical methods books from the 1940s or early 1950s.

The first version of the Logic Theorist by Newell, Simon, and Shaw was simulated using humans in 1956 (less expensive than computer time). Newell and Simon later won a Turing Award for basic contributions to AI, the psychology of human cognition, and list processing (the Logic Theorist may have been the first program to use linked lists to represent data structures). The experience also influenced Simon's later ideas on emergent behavior (see his The Sciences of the Artificial).

As pointed out in the comments and other answers, there is now an emerging discipline of human-based computation, where various incentives are used to get humans to do parts of a larger calculation, where those parts make good use of human problem solving or pattern recognition. One example is reCAPTCHA, where users need to enter two words to prove that they are not a bot: one is a distorted image used for the actual "proof", and the second is a word from a scanned book, which is used to produce a digitized version of the book. Another example is Amazon's Mechanical Turk, where a business can outsource "microtasks" to human workers for small sums of money. The Mechanical Turk has been used, for example, to collect annotations on 250,000 images for image-processing research. The key seems to be breaking the problem down into a pile of independent work items, with significant amounts of redundancy used to reduce errors. (E.g., you assign the same work item to two different humans, and if they provide conflicting answers you assign the work item to a third human to resolve the difference.)
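
As a minimal illustration of that redundancy scheme (the function names and the 90%-accurate simulated worker are assumptions for the example, not any real Mechanical Turk API):

```python
# Sketch of the redundancy scheme described above: each work item goes
# to two workers; if they disagree, a third worker breaks the tie, and
# the majority answer wins.

import random
from collections import Counter

def resolve(item, ask_worker):
    """Collect redundant answers for one work item and return the consensus."""
    answers = [ask_worker(item), ask_worker(item)]
    if answers[0] != answers[1]:
        answers.append(ask_worker(item))  # conflicting answers: ask a third human
    return Counter(answers).most_common(1)[0][0]

def noisy_worker(item):
    """Simulated human worker: returns the right label 90% of the time."""
    truth = item % 2
    return truth if random.random() < 0.9 else 1 - truth

if __name__ == "__main__":
    wrong = sum(resolve(i, noisy_worker) != i % 2 for i in range(10_000))
    print(f"errors after redundancy: {wrong} / 10000")
```

With independent 10%-error workers, the two-plus-tiebreaker scheme drops the residual error rate to a few percent, which is the point of the redundancy.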

Wandering Logic

I would think that, in a way, current proof development technology, and possibly associated program synthesis techniques, rely on a symbiosis between humans and computers, which is not far removed from the example of the Manhattan Project human computers. The computer provides some steps of the reasoning and does all the tedious, though difficult, book-keeping, while humans provide the "aha" steps that the computer cannot (yet?) find.

I remember an old program transformation system in which transformations were programmed in a specific programming language. When the program identified a situation it could not handle, it could pass control to the user, who was supposed to do whatever was needed by hand, with interpreted commands, and then pass control back to the transformation program.

babou

As other answers point out, humans were used as computers before hardware-based computing existed (mainly for calculating large mathematical tables published as volumes), and that is the original, literal meaning of the word "computer". In the history of computing, the trend has been in exactly the opposite direction, away from human computing and towards hardware-based computing, because humans are essentially unreliable (and increasingly unnecessary) for nearly-mechanical tasks.

However, social networking has given rise to new forms of human-based computing, aka "collective intelligence" (CI). There are many examples. On Stack Exchange, question "ratings" (positive minus negative votes) and "hot questions" are based on the CI of Stack Exchange users (expressed via voting). Algorithms to find similar items on, e.g., Amazon based on user behavior are related to CI. Similar algorithms run on Netflix to find similar movies based on user preferences (and user-submitted ratings).

Google's PageRank is designed to work based on the CI encoded in link patterns (linking on web pages is ultimately based on human choices). Facebook is introducing a new Graph Search algorithm, also tightly coupled with CI. Note that even which friends a person has, as expressed in social networks, is related to CI.
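
To make the link-pattern idea concrete, here is a bare-bones power-iteration sketch in the spirit of PageRank (the damping factor, iteration count, and toy graph are illustrative assumptions, not Google's actual implementation):

```python
# A PageRank-style power iteration: every human choice to link from
# one page to another acts as a vote, and iterating the vote flow to
# a fixed point yields a ranking of the pages.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

if __name__ == "__main__":
    toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```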

So, as far as the long-range trend goes, the use of humans as digital or mechanical computers has been in decline for the entire 20th century, continuing into the 21st, but collective intelligence is very much on the rise, as are cheap computing and computing clusters fueled by Moore's law.

vzn

I don't know if anyone else has mentioned this in the ten years this question has been up, but you should read The Three-Body Problem by Liu Cixin.

Anshul

This is a real-world challenge and closely relates to workflows. The idea of a workflow is to have a queue, or an ordered or unordered list, of tasks that people or computers pick up at any time to complete a bigger process that they may or may not be concerned with, or even know about, at all.
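
A minimal sketch of that idea, assuming nothing more than Python's standard queue module (the task strings and worker name are placeholders):

```python
# A shared queue of tasks; any worker (human or machine) can pull the
# next task without knowing anything about the overall process.

from queue import Queue

tasks = Queue()
for item in ("digitize page 1", "digitize page 2", "check the totals"):
    tasks.put(item)

def work(worker_name):
    """Pull tasks until the queue is empty and report each one done."""
    while not tasks.empty():
        task = tasks.get()
        print(f"{worker_name} completed: {task}")
        tasks.task_done()

work("worker-1")
```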

In effect, you will have a machine that can make something, and usually this is the main challenge and/or goal for an entrepreneur, mostly because money can be made if you can automate something and then step away from it (like a machine).

The inherent problem with using humans is that they make more mistakes or get bored with the work. Basically, this is also why entrepreneurs try to replace the human parts with mechanical or computer parts.

jwize