Originally published in the Scholarly Kitchen on August 6, 2015


The digital era has increasingly led to distributed networks and a move away from centralized locations for both people and data. As our ability to communicate, participate, and share has increased, our ecosystem has changed. 

This month we asked the Chefs: How has the move to distributed networks impacted scholarly publishing?

Joe Esposito: Rather than think about this in the present perfect tense, I would prefer to project into the future, where things get very different and probably uncomfortable for many readers of this blog. One of the structural advantages of the kind of publishing that PLOS ONE and its countless imitators do is that it is aligned with the underlying network upon which we all operate. Many of the things that don’t work very well today (e.g., post-publication peer review) have the advantage of swimming with the current of technology. Networks make expression more conversation-like, less fixed. We are at the very beginning of a world without versions of record, fixed DOIs, and clear precedence. All this is anathema to established publishers, many of which retain me to tell them that they are doomed. It’s a great way to make a living. The distributed nature of networks undermines broad authority structures (e.g., brands, conventional practices such as peer review) and replaces them with a fitful pluralism. It is useful (not correct–what would that be?–but useful) to think of post-modernism not as a cultural paradigm but as a technological prediction.

Phill Jones: The move to cloud computing for productivity apps has had a direct effect on me personally by enabling me to have a job with a London-based company while living in Edinburgh. In fact, the team in which I work is both globally distributed and closely knit, thanks to technologies that let us send messages instantly, meet face to face, and collaborate on documents in the cloud. It’s amazing to me that I can do my part to contribute from anywhere in the world, so long as I have a device and occasional internet access. (Incidentally, I’m writing this on my phone while on a delayed plane, sitting on the tarmac in Newark.)

In terms of the products and services we offer, frankly, we haven’t begun to scratch the surface of what could be done. The question is: ‘How will it impact scholarly publishing?’

Rick Anderson: This is obviously a question that could be answered in any number of equally valid ways, but the facet of this move that I personally find most interesting is the way in which it has turned publishers from widget-producers into service providers. Now, to be clear, publishers have always provided services to authors, but before the 1990s they mostly provided widgets (printed books or journal issues) to readers. They provided services to authors in return for publishing rights, and provided widgets to readers in return for money. Today, of course, it’s nowhere near that simple: in addition to providing authors the usual services under the usual terms, a growing number of publishers offer authors the option of paying to make their work available to the world for free — and some publishers, of course, operate exclusively on that model. From the reader’s side, it’s decreasingly common to simply pay money for a physical object into which information is encoded. Instead, readers increasingly enter into service agreements with publishers whereby they are given access to hosted content and the publisher assumes the responsibility for ensuring ongoing access to that content. This shift — from the buying and selling of objects to the buying and selling of access rights — has had ramifications that have yet to be fully understood, and I’m not even sure that all of the ramifications have emerged yet.

Charlie Rapple: The move to distributed networks could be seen as one of the contributing factors to the healthy level of innovation in the scholarly publishing ecosystem. It has created a cultural and technological environment in which barriers to entry have been lowered and small organizations can thrive as connected nodes in (effectively) a distributed network. So start-ups can focus on building competence in a specialist function, without having to replicate related processes or data, because these can be bolted on from other providers — from those that are well-established (such as CrossRef) to other start-ups (e.g. partnerships between Kudos and TrendMD, or Altmetric and Mendeley). 

Another trend that could be considered a distributed network is the rise of citizen science and projects like Zooniverse, which harness the power of hundreds of thousands of volunteers to analyze and annotate research data such as images of galaxies or videos of animals in their natural habitats. Such initiatives have their roots in distributed computing projects such as SETI@home, which encouraged people to use the idle power of their home computers to help in the search for extra-terrestrial intelligence. Zooniverse and other such “wisdom of crowds” projects show that, while in many contexts we no longer need to distribute computing power, we still need to distribute people power. The impacts of this on scholarly publishing include the opening up of our value proposition to non-researchers — with Zooniverse’s “citizen scientists” often key to discoveries that are then published, there are implications for the language and formats in which research is communicated, the processes and business models by which it is made available, and the accessibility of all of these to people not steeped in scholarly publishing.

Judy Luther: Distributed networks allow discovery to take place on a different platform than delivery. In contrast, aggregated collections such as JSTOR and ScienceDirect are sufficiently large to serve as destination sites where users find and access content subscribed to by their institutions. The value of the latter is that the user has a seamless experience in connecting to the content. However, with Google as the dominant search engine, the reality today is that academic users link to content on a variety of platforms. This approach depends upon a reliable knowledge base, and that can be a point of failure. Issues around this vulnerability have prompted Google to request access to publishers’ subscription records, and publishers are often reluctant to share this proprietary data with the information giant. Even with open access content, the link between the point of discovery and the point of delivery must be reliable.

One of the benefits of distributed networks is that they offer a single point of access where metrics can provide a more complete picture of usage data for authors, editors, and publishers. Some large publishers and platforms provide an enhanced experience for users, something they cannot control when their content appears on other platforms. Increasingly, the variety of content, including media and data, requires more sophisticated support that cannot be replicated cost-effectively. The opportunity to ‘publish once, view many times’ is one of the primary advantages of content on the Internet. Networks are designed for connections between users and content, and leveraging that capability, with the necessary attention to links, offers broader access to more content.

David Crotty: One area where this change has had a direct impact is the growing question of the value of investing in big, centralized repositories of information. We live in an increasingly distributed world, and our search tools continue to improve their capabilities for ferreting out information regardless of its location. So why should we bother to collect things in large and expensive baskets?

This question is playing out in the various responses to the US government’s policy on public access to research papers. The Department of Energy has chosen a forward-looking distributed approach with its PAGES service, which collects articles where necessary but focuses instead on metadata and providing pointers to content hosted elsewhere. Contrast this with the NIH’s PubMed Central, a centralized repository where all material is collected and stored in one place.

Is it still worth spending millions of dollars every year to keep a copy of everything in one place, or is that an obsolete approach? The effectiveness (and particularly the cost-effectiveness) of these methodologies will be under great scrutiny over the coming years.

Ann Michael: As Phill’s response demonstrates, many employers are now able to leverage a much larger pool of job candidates. The geographic location of those candidates is often flexible because of increasingly effective tools and technologies for communication. What I find exceptionally interesting is that it was a bumpy ride at first: people and cultures (ingrained expectations) needed to “evolve” to take advantage of these tools. And while I am a firm believer that there is no substitute for face-to-face interaction, in many roles there is an opportunity to balance in-person interaction with a well-performing virtual team. In our case, our core team ranges from Boston to Florida to San Diego to Portland (with one Canadian!), and it works because of established practices, distributed networks, and an abundance of communication tools. God bless the Internet!

As the Chefs illustrate above, distributed networks have impacted our infrastructure, our work environments (remote or in-office), our ability to innovate, the discoverability of content, the data networks that support content, and the very nature of the products and services offered in our ecosystem (access versus physical goods). There is also fairly general agreement that in some areas, products and services among them, we’ve only just started to see the impact.

Now it’s your turn: How has the move to distributed networks impacted scholarly publishing? What have you seen? What do you expect to see?