I’ve argued for a while now that we’re at or near a data tipping point, beyond which lies a new world where companies analyze many fundamentally new types of data in real time and use it to make business decisions that were previously impossible.
But every tipping point produces winners and losers. I believe that in this case the winners will share one crucial quality: a deliberate choice to put open source technologies at the heart of their modern data architecture.
If you think about it, isn’t the Internet just a giant mesh network? If we deviate slightly from the standard definition, which requires all nodes to assist in data distribution, the answer is a resounding yes.
However, traditional network technologies and the vendor-provided hardware required to run them are often far too expensive to deploy without deep pockets. Open source software aims to solve these problems and bring network connectivity to marginalized groups all around the world.
“The biggest challenge for supercomputing is the demand to compress time,” says Jerry Cuomo, vice president of Blockchain for Business at IBM. “Business processes must now be completed at a significantly faster pace than before. The result is that the demand for computing power is increasing exponentially.”
The peer-to-peer nature of the blockchain and distributed ledgers will also help move computation closer to where the data is being generated, and avoid bottleneck round-trips to cloud servers.
The internet has become a fixture of our lives, around which they revolve whether we like it or not. Unless you happen to live in the middle of nowhere with no access to media at all, not a day goes by that we don’t encounter it. It has truly changed the way we live, and mostly for the better. Over the past sixteen years, life has become far more convenient as we have discovered ever more ways of using the internet’s capabilities.
Back in 2007-2008, at the University of Oxford, we did research relevant to your thinking on Applied Collective Intelligence. We focused on “distributed problem-solving networks”, including a look at distributed film production and a number of open source projects:
William Dutton: Distributed Problem-Solving Networks (pdf, 194kb). Arguing that DPSNs reconfigure information and communication flows to enhance the communicative power of networked individuals across geographical and organizational boundaries, creating challenges for firms and organizations that seek to capture the value of DPSNs.
1. Sermo: A Community-Based Knowledge Ecosystem for Physicians
Exemplary of Web 2.0 developments, some problem-solving networks are anchored in user-generated content. Sermo is one such network: a community-based knowledge ecosystem for licensed physicians in the USA. Physicians can ask and answer questions and surveys posed by other doctors, pharmaceutical firms, or other paying problem-holders. The Sermo community of over 50,000 doctors sorts through conversations and identifies interesting health trends, cases, and other novel health insights for the benefit of multiple stakeholders.
2. Seriosity: Addressing the Challenges of Limited Attention Spans
Human beings like to earn points, as demonstrated by Seriosity, a creative use of games as a way to create an incentive for individuals to pay closer attention to their use of email and to tackle information overload. The system lets individuals attach a virtual currency to email, simulating the redistribution of resources in ways that lead them to be more strategic about which messages they send, read, and open from co-workers. The Seriosity approach has interesting side effects, including individuals exchanging the virtual currency for real-world tasks and favors in the workplace.
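As an illustration of that mechanism, here is a minimal sketch in Python; the message fields, the 100-unit starting budget, and the Mailbox API are my own assumptions for the example, not Seriosity’s actual product:

```python
from dataclasses import dataclass, field

@dataclass
class Mailbox:
    owner: str
    balance: int = 100          # assumed starting budget of virtual currency
    inbox: list = field(default_factory=list)

    def send(self, to: "Mailbox", subject: str, attached: int) -> None:
        # Spending a scarce currency forces the sender to prioritise messages.
        if attached > self.balance:
            raise ValueError("not enough currency to signal this priority")
        self.balance -= attached
        to.inbox.append((attached, self.owner, subject))

    def read_by_priority(self):
        # Recipients read high-stake messages first and earn the attached currency.
        for attached, sender, subject in sorted(self.inbox, reverse=True):
            self.balance += attached
            yield sender, subject

alice, bob = Mailbox("alice"), Mailbox("bob")
alice.send(bob, "FYI: lunch menu", attached=1)
alice.send(bob, "URGENT: server down", attached=30)
print(list(bob.read_by_priority()))  # the urgent message surfaces first
```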
3. The Performance of Distributed News Aggregators
Originating as a study of the online news aggregator Digg, which relies on user ratings to determine which articles appear on the front page, this study evolved into a comprehensive survey of the online news aggregator space. The case study explores the conditions under which crowds are smart, analyzes the bias of several modes of information aggregation, and shows the risk of mob behaviour.
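To make concrete why the mode of aggregation matters, here is a small illustrative sketch in Python; the two scoring rules are generic examples (a raw vote count versus a commonly cited time-decayed news-ranking heuristic), not Digg’s actual algorithm:

```python
def raw_score(votes: int) -> float:
    # Pure vote count: rewards whatever the crowd piled onto earliest.
    return float(votes)

def decayed_score(votes: int, age_hours: float, gravity: float = 1.8) -> float:
    # Time-decayed ranking: newer stories outrank stale ones with the
    # same vote total, dampening herd momentum on the front page.
    return votes / (age_hours + 2) ** gravity

stories = [("old viral story", 500, 48.0), ("fresh story", 40, 1.0)]
for name, votes, age in stories:
    print(name, raw_score(votes), round(decayed_score(votes, age), 2))
# Raw counts favour the stale pile-on; the decayed score promotes the new item.
```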
4. Information Markets: Feasibility and Performance
The performance of prediction markets has been one driving force behind the renewed attention on distributed problem solving. This case reviewed the feasibility and performance of prediction or information markets, discussed some apparently successful applications, considered likely limitations of information markets, and identified important areas for future research.
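One standard mechanism for running such a market is Hanson’s logarithmic market scoring rule (LMSR). The sketch below is a generic Python illustration of that rule, not a system examined in the case study, and the liquidity parameter b=100 is an arbitrary choice:

```python
import math

def lmsr_cost(quantities, b=100.0):
    # Market maker's cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    # Instantaneous prices sum to 1, so they can be read as the
    # market's aggregate probability estimate for each outcome.
    z = sum(math.exp(q / b) for q in quantities)
    return [math.exp(q / b) / z for q in quantities]

q = [0.0, 0.0]                   # two outcomes, no shares sold yet
print(lmsr_prices(q))            # [0.5, 0.5]: maximum uncertainty
cost_before = lmsr_cost(q)
q[0] += 50                       # a trader buys 50 shares of outcome 0
print(round(lmsr_cost(q) - cost_before, 2))   # what that trade cost
print([round(p, 3) for p in lmsr_prices(q)])  # price of outcome 0 rises
```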
5. The ATLAS Collaboration: A Distributed Problem-Solving Network in Big Science
The nature of the problem tackled by the ATLAS collaboration – the creation of a radically innovative particle detector experiment – makes ATLAS an exceptional case for studying DPSNs. The problem solving is distributed across multiple groups of problem solvers comprising 2000 scientists in 165 working groups across the globe. Similarly, the engineering, construction and installation of the many components is distributed across this collaborative network. The initially surprising finding of the case study is that this joint innovation effort succeeded despite breaking with most rules of traditional project management. Philipp Tuertscher analyzes what it took to make such a loosely structured organization work, and raises the question of whether such structure was even required to develop a complex technological system like the ATLAS detector.
6. Quality Control and Quality Assurance in the Mozilla Project
The case study of the Mozilla project focuses on the organization of quality control and quality assurance in a distributed innovation environment, and in particular on the coordination of detecting and correcting operating defects (‘bugs’) in Mozilla’s Firefox web browser. Analyzing two samples of bugs drawn from the roughly 40,000 that have resulted in a change to the Firefox code base, the case study finds, among other things, that bug-treatment behavior in the project was not temporally stable; that bug reports from ‘outsiders’ took longer to reach a successful resolution and were more likely to remain ‘un-fixed’; and that factors such as the objective technical complexity of the bug-patching problem, and the level of effort devoted to contextualising the reported defect, played significant roles in determining the speed with which a bug was typically fixed.
7. Wikipedia as a Distributed Problem Solving Network
Wikipedia, the free online encyclopaedia put together by volunteers, is a prime example of a distributed problem-solving network, with a global array of contributors creating a resource that has been compared to leading encyclopaedias. The study focused on efforts to maintain the quality of Wikipedia entries, and in particular on the use of tagging to signal the need for improvement in entries of Simple Wikipedia.
8. Distributed film production: Artistic experimentation or feasible alternative? The case of ‘A Swarm of Angels’
A Swarm of Angels was selected as a case study of open content film production. The project was based in Brighton, England, but it extended the open source model to movie making in ways that could bring distributed collaborators into the film project. The case study highlighted the existence of a core group of contributors and a periphery of silent supporters, both of which play an important role in the project’s performance.
Max Loubser: Governance Structures in Distributed Problem Solving Networks (pdf, 110kb). Evaluating the role of governance in DPSNs and comparing governance structures across the case studies. It proposes a basic taxonomy separating DPSNs in which problem solvers can modify the way the problem-solving platform works from those where intermediaries set the rules.
Tim Berners-Lee, the creator of the World Wide Web (WWW), is exploring the idea of a new decentralised version of the web, along with other internet scientists, reports The New York Times. The Decentralized Web Summit was held June 8-9 in San Francisco; its participants envision a web that is not controlled by corporations or governments anywhere in the world.
My recent interview with Andreas M. Antonopoulos, information security expert, tech-entrepreneur, and author of “Mastering Bitcoin”.
2. Bitcoin is simultaneously a currency, a financial asset, and a technology protocol. Underlying bitcoin is the blockchain, a distributed public ledger. For people who are not intimately familiar with either bitcoin or blockchain, could you briefly explain what these technologies consist of?
The blockchain is a distributed database. The magic of bitcoin comes from sharing control over that distributed database through a consensus mechanism called “Proof of Work”. This ensures that no one is in control of bitcoin and that it operates based on predictable rules.
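To make the “Proof of Work” idea concrete, here is a minimal sketch in Python. The leading-zeros difficulty rule and the single SHA-256 pass are simplifications for illustration (Bitcoin uses a double SHA-256 over a structured block header and a numeric target), so treat this as a toy model rather than real consensus code:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # costly to find...
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """...but anyone can check the proof with a single hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block 1: example transactions")
print(nonce, digest)
print(verify("block 1: example transactions", nonce))  # True
```

Because every node can verify a proof cheaply while producing one is expensive, no single party can rewrite the shared database without redoing the work, which is what lets the network agree on predictable rules with no one in control.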
As even Tim Berners-Lee has recognised, the volume of data with which we’re being bombarded prevents us from engaging in genuine debate.
On average, we check our smartphones 200 times a day – for emails, alerts, tweets or text messages. That’s before using any one of our phone’s multiple applications. It is a degree of connectivity to one another, and the world beyond, that is unparalleled. And it’s difficult to imagine life without it – to be so connected is to have access to instant knowledge, instant exchange, instant laughter and anger.