News & Views: Can open access be made more affordable?

Dan Pollock and Ann Michael • August 22, 2023

This month we look at the challenges of providing equitable access to OA around the world. Among the many issues discussed, a central one is cost. We look at how prices could be adjusted to match local affordability… and turn up some surprising results.


Background


Cost is key in the active discussions around the challenges of providing equitable access to OA. Fees that are modest in one country may be very expensive in another, particularly in the less wealthy regions outside the US and Europe.


cOAlition S, UNESCO, the International Science Council (ISC), the Open Access 2020 Initiative (OA2020), Electronic Information for Libraries (EIFL), the Association of African Universities, and Science Europe are organizing a series of workshops on global equity in Open Access publishing. OASPA has convened a series of workshops examining equity in pricing across all forms of OA business model – “to help dismantle the financial barriers authors face to participation [in OA]”. cOAlition S has commissioned a study to explore a “globally fair pricing system for academic publishing.”


One of many ideas being discussed is basing fees upon what is affordable locally, rather than pricing them at an identical level for customers irrespective of their geographic location. Precedents exist, such as the tiered pricing of vaccines.


Quantifying affordability


To set prices based on affordability, we need to account for how cheap or expensive things are in different countries. Exchange rates alone don’t capture differences in local purchasing power.


The Big Mac index1 illustrates the problem, as it allows us to quickly compare the same item across many countries. If currency exchange rates reflected local purchasing power, then a $5.36 Big Mac in the US would cost the local currency equivalent of $5.36 everywhere. However, a Big Mac costs the local currency equivalent of, for example, $7.28 in Switzerland, or $4.44 in Brazil2.


Such burgernomics provides an easily digestible example of the principle of quantifying differences in affordability between countries. In practice, economists use Purchasing Power Parity (PPP), which is calculated from a broad basket of goods and services.
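To make the arithmetic concrete, here is a minimal sketch of the burger comparison in Python, using only the three January 2023 prices quoted above (anything beyond them would be an assumption):

```python
# Back-of-envelope "burgernomics": compare the dollar-equivalent price
# of the same item across countries. Prices are the January 2023
# Big Mac index figures cited in the text.
big_mac_usd = {
    "United States": 5.36,
    "Switzerland": 7.28,
    "Brazil": 4.44,
}

us_price = big_mac_usd["United States"]

for country, price in big_mac_usd.items():
    # Ratio > 1: the burger costs more in dollar terms than in the US,
    # suggesting the local currency is overvalued against the dollar;
    # ratio < 1 suggests the opposite.
    ratio = price / us_price
    print(f"{country}: {ratio:.2f}x the US price ({ratio - 1:+.0%})")
```

On these figures, Swiss prices come out roughly 36% above, and Brazilian prices roughly 17% below, what exchange rates alone would suggest.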


How affordable are things around the world?


The map below (Figure 1) compares differences in purchasing power around the world.

Source: World Bank, Delta Think analysis. © 2023, Delta Think Inc., all rights reserved.


The map’s shading indicates the ratio of each country’s PPP conversion factor (purchasing power) to its US dollar exchange rate. This ratio can be used to visualize the local affordability of prices set using currency exchange rates. The darker the area, the less affordable prices are, due to weaker local purchasing power; there is arguably over-charging in these areas. Conversely, the lighter the shade, the more affordable prices are. (Note, the very lightest areas – such as Antarctica along the bottom – have no data.)
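As a rough illustration of the metric behind the map – with invented placeholder numbers standing in for the World Bank data – the ratio could be computed like this:

```python
# Sketch of the map's metric: the ratio of a country's PPP conversion
# factor (local currency per international dollar) to its market
# exchange rate (local currency per US dollar). A ratio below 1 means
# weaker local purchasing power, i.e. dollar-based prices are less
# affordable there. The figures below are illustrative placeholders,
# not World Bank data.
countries = {
    #            (PPP conversion factor, exchange rate)
    "Country A": (1.20, 1.00),  # stronger local purchasing power
    "Country B": (2.10, 5.20),  # weaker local purchasing power
}

for name, (ppp_factor, fx_rate) in countries.items():
    price_level = ppp_factor / fx_rate
    verdict = "more affordable" if price_level >= 1 else "less affordable"
    print(f"{name}: ratio {price_level:.2f} -> {verdict}")
```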

Unsurprisingly, high-income countries – such as the US – can better afford things than low-income countries – such as many in Africa. However, the middle ground is quite extensive. For example, it includes many countries in Europe, even though Europe is considered a wealthy part of the world.


Making prices affordable


If open access prices were to be set based on affordability, then what sort of discounts – or increases – might be needed to make them equitable?

Source: Delta Think market data, World Bank, Research4Life, OpenAlex, Delta Think analysis. © 2023, Delta Think Inc., all rights reserved.


Figure 2 is a box plot showing how prices might need to change to reflect affordability based on local purchasing power (PPP). The horizontal line at 0% represents current state pricing. There is one dot per country. The dot’s distance from the 0% baseline represents how much prices in the country would need to change to reflect affordability based on PPP. The dots are not labelled to indicate country, as the point of the chart is to show the overall patterns.


To help put the comparison in context, we show different combinations of countries. These are the vertical groups of dots: N. America and Europe, Research4Life (R4L) eligibility (Group A/free and Group B/low-cost), and World Bank Income categories. Countries may appear in more than one group. The hollow boxes show the two middle quartiles for each group, with the horizontal line across the box showing the median change. For example (a brief calculation sketch follows this list):

  • The dots for the US require no change (as PPP is set relative to USD), so they appear on the 0% line. They fall in the leftmost group (N. America) and the High-income group, but not in the R4L groups, as the US is not eligible for discounted prices.
  • Brazil would require a 50% price discount to reflect its PPP. Its dot is plotted at the -50% line in the Middle-income group (the only category it falls into in our chart).
  • Switzerland, at +25%, is the outlier in the High-income category.
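Here is the promised sketch of the calculation behind Figure 2. The required price change for each country is its PPP-to-exchange-rate ratio minus one, and each group’s box summarizes the distribution of those changes. The US, Swiss, and Brazilian ratios follow the examples above; the other countries are invented placeholders.

```python
# Per-country PPP price adjustments and a box-plot style summary.
# The first three ratios reflect the worked examples in the text;
# "Country D" onwards are placeholders for illustration only.
from statistics import quantiles

price_level_ratio = {
    "United States": 1.00,
    "Switzerland": 1.25,
    "Brazil": 0.50,
    "Country D": 0.40,
    "Country E": 0.55,
    "Country F": 0.65,
    "Country G": 0.70,
    "Country H": 0.90,
}

# Required price change to reflect PPP: the price level ratio minus 1.
adjustment = {c: r - 1 for c, r in price_level_ratio.items()}
for country, change in adjustment.items():
    print(f"{country}: {change:+.0%}")

# Group summary, as in the box plot: the two middle quartiles and the
# median change for this (hypothetical) group of countries.
q1, q2, q3 = quantiles(list(adjustment.values()), n=4)
print(f"median {q2:+.0%}; middle quartiles {q1:+.0%} to {q3:+.0%}")
```

With these placeholder values, the middle quartiles land entirely below 0%, echoing the pattern discussed next.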


What is striking in Figure 2 is that the middle quartiles of almost every group of countries lie entirely below 0. In other words, it is not only the case that lower-income countries have lower purchasing power, requiring downward price adjustments for parity. Most countries – including many in the Global North, in Europe, or in the High-income category – have lower purchasing power than the very few at the top.


As a result, when we estimate the effects of the entire market moving to a solely PPP-based model, we see overall market value (i.e., the cost to those paying to participate) dropping by roughly 34%.
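A simplified sketch of how such a market-wide estimate can be made: weight each country’s PPP-adjusted price by its article output and compare the total against today’s. The counts, price, and ratios below are invented placeholders (the figure above draws on Delta Think market data and OpenAlex output), so the resulting percentage is illustrative only.

```python
# Estimate the change in total market value if every country paid
# PPP-adjusted prices. All inputs are illustrative placeholders.
articles = {"Country A": 900, "Country B": 400, "Country C": 700}
price_level_ratio = {"Country A": 1.00, "Country B": 0.50, "Country C": 0.45}
list_price = 2000  # a single notional APC in USD, for simplicity

current_value = sum(n * list_price for n in articles.values())
ppp_value = sum(
    n * list_price * price_level_ratio[c] for c, n in articles.items()
)
print(f"market value change: {ppp_value / current_value - 1:+.1%}")
```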


If we assume that a market value reduction of this magnitude would not be the end state, what price increases would be needed to offset it while still reflecting purchasing power parity? Assuming full recovery of current revenue (not likely, but it makes for an interesting like-for-like comparison), who would pay what?

Source: Delta Think market data, World Bank, Research4Life, OpenAlex, Delta Think analysis. © 2023, Delta Think Inc., all rights reserved.


The bars in Figure 3 show how prices would need to change to both reflect PPP and maintain current total market value. This allows us to visualize what the current market might look like if it were priced according to affordability.


Countries with stronger PPP would subsidize weaker ones, but, again, the results turn up surprises. For example, the top quartile of countries in Research4Life Group A (free access) would see price rises, while over half of those in the High-income category, and over half of Europe, would require price cuts.


For context, consider our burgernomics examples. Prices would need to fall by 50% in Brazil to reflect PPP alone, or by 25% to maintain total market value. Switzerland would see price increases of 10% (PPP only) or 66% (market adjusted). And the home of the Big Mac? 0% (PPP only, as PPP is set relative to USD), or an increase of 51% (market adjusted).
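Those market-adjusted numbers can be reproduced, to within a percentage point of rounding, by applying one uniform scale factor that claws back the ~34% market value drop – a minimal sketch, assuming that uniform uplift is how the adjustment is spread:

```python
# Revenue-neutral rescaling: scale every PPP-adjusted price by the
# same factor so the total market value returns to today's level.
# The ratios reflect the burgernomics examples in the text; the
# uniform-scaling assumption is ours, for illustration.
price_level_ratio = {"Brazil": 0.50, "Switzerland": 1.10, "United States": 1.00}
scale = 1 / (1 - 0.34)  # offsets the ~34% drop estimated above

for country, ratio in price_level_ratio.items():
    ppp_only = ratio - 1
    market_adjusted = ratio * scale - 1
    print(
        f"{country}: {ppp_only:+.0%} (PPP only), "
        f"{market_adjusted:+.0%} (market adjusted)"
    )
```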


Conclusion


There are many issues surrounding the equity of OA publishing, but affordability remains the major one.

The APC barrier effect suggests that “APCs impede researchers with fewer resources in publishing their research as OA”. Transformative Agreements (TAs) and Read & Publish (R&P) deals, which may base their pricing on APCs, can bring affordability problems similar to those of APCs themselves. The expense of subscriptions, too, even for the wealthy, has been discussed at length, and their cost is one of the drivers behind advocacy for a move to OA. Affordability is an issue whatever the business model.


Waivers are the usual fix, but they can be problematic. Their implementation varies, and they may be perceived as patronizing or undermining the dignity of those receiving them (“Waivers are a charity; why can we not pay in our own way with our own money?”). Waivers are typically applied based on World Bank income categories, but, as our analysis of its data shows, these may not match affordability.


At first glance, a PPP-based pricing model is attractive: it strikes at the heart of affordability by accounting for participants’ ability to pay. However, as we have seen, it is not that simple. A move to PPP would, in most cases, mean price increases for many (some of them unexpected) to subsidize those that need more affordable options. This could result in some controversial changes. The impact would be magnified if publishers attempted to adjust prices upwards overall to counteract market value shrinkage.


A PPP-based pricing system, while attractive in principle, would need to be carefully implemented in practice. Prices or pricing tiers would need to account for more than the raw numbers. Optics would need to be carefully considered. There will be winners and losers. And, like William Gibson’s view of the future, they will be unevenly distributed.

For those interested in exploring this topic further, we have developed a tool to help organizations explore affordability-based scenarios. Please get in touch.


1The Big Mac index was invented by The Economist in 1986 as a “lighthearted guide to whether currencies are at their ‘correct’ level. It is based on the theory of purchasing-power parity (PPP), the notion that in the long run exchange rates should move towards the rate that would equalise the prices of an identical basket of goods and services (in this case, a burger) in any two countries.” Further color was added in a 1994 article in the UK’s The Independent newspaper – Nick Wiseman, a statistician with The Economist, said: “We are often asked why we don’t use the price of The Economist or of prostitutes instead. The former is printed in various places and the price is not uniform while the cost of the latter may depend on local custom.” The term burgernomics was coined by editor Pam Woodall at the same time the Big Mac index was first produced.

2The Economist Big Mac index, January 2023 figures, taken as a proxy for 2022 end of year.


This article is © 2023 Delta Think, Inc. It is published under a Creative Commons Attribution-NonCommercial 4.0 International License. Please do get in touch if you want to use it in other contexts – we’re usually pretty accommodating.

