Showing posts with label CCC. Show all posts
Wednesday, October 24, 2012
In RE Books: Conference on law and the future of Books
I'll be going to this on Friday and Saturday. It may already be full, but this conference from New York Law School looks interesting. More details HERE.
Wednesday, August 15, 2012
Will a (B)Million Dollar Question and Lots of Money Spell the End for Collecting Agencies?
Some say that New Jersey is the most politicized state in the nation because the ratio of elected officials to state citizens is one of the highest in the land. Rest assured this post isn't a bash against participatory democracy but, in thinking about this, I realized two things. Firstly, there are many elected officials whose job and function I am convinced the electorate knows nothing about (e.g., Freeholder, Sheriff, Advocate), and, secondly, each of these officials has a desk, chair and bookshelf lodged in an office park somewhere, as well as a gas allowance and a Staples credit account. The embedded expense drain alone should cause all citizens to question why we need so much "representation". Evolution has yet to come to New Jersey politics, but it will, as technology begins to light up the obscurity.
Technology has helped break down barriers in many industries and its creeping impact on the media industry is inexorable. I was reading recently of proposals to improve the way European collecting agencies operate and was struck by an odd similarity to New Jersey politics: an ignorant proletariat saddled with an overhead-laden bureaucracy. The way royalty collecting agencies operate in a connected world may mean they will be the next media 'industry' to face disruption: new technology, and consequent changes in the way content owners license content, should eliminate the need for duplicative collecting agencies around the world. The signs are already there.
Royalty collection reform legislation submitted to the European Commission presents a picture of operational obscurity, questionable business practices and general mismanagement which appear to have long plagued the relationship between content owners and the local agencies charged with protecting their interests. Musicians (especially) and other content owners should be looking forward to a future in which technology will enable a more transparent system of policing and collection, which will lead to fairer compensation (to them). Regrettably, the proposals put forth in this reform legislation don’t point directly to that future; rather, they serve only to embed the incumbent collecting societies, requiring them simply to adopt some new policies, procedures and standards.
If it were up to Pink Floyd (which one's pink?) and Radiohead, the whole lot would go in a liberalization of the entire marketplace. In their view, protectionism is precluding better accountability and a more expansive commercial market.
"We are deeply disappointed by your choice to defend the interests of a minority of managers and stakeholders," said a letter signed by Pink Floyd's Nick Mason, Radiohead's Ed O'Brien, British singer Sandie Shaw, producer CJ Bolland and the director of Younison (an artists' lobby) Kelvin Smits.
According to the artists and the commission’s report, collection societies -- up to 250 of which operate in Europe -- keep "substantial amounts of money" on their books pending distribution or varying time schedules, to the detriment of the artists they are tasked with supporting. In their letter to the commission, the musicians were blunt as to the conclusions suggested by the report (TheVerge),
"You thus legitimise one of the most problematic forms of embezzlement adopted by some collecting societies in Europe," their letter reads. According to research completed by the commission, in 2010 major societies owed 3.6 billion euros ($4.41 billion) in (undistributed) royalties to the creators. That's some serious cash money.
Even with the new improvements, there's a suggestion that some of these societies don't have the operational capacity or technological wherewithal to accommodate the significant changes in the marketplace that are already present. They haven't invested appropriately in new systems, to the extent that they are already struggling to cope. That reality, and a pervading notion that the agencies have never been able to collect all royalties due artists, must really rankle with the artists paying close attention to this issue. (Some of the agencies see things differently and have, by their own account, been investing in their operations; see the WSJ.)
To the latter point, the US recently saw the launch of a new company (TuneSat) that promises to revolutionize the collection of music and performance royalties and, in the process, collect a far greater percentage of collectable royalties for artists and content owners. Profiled in the WSJ, one of the founders of TuneSat notes that many collecting societies operate the same way they have for the last 75 or 100 years. As with many good ideas, TuneSat was founded in response to a specific problem and out of frustration with the way things operate; and the founders chose to solve that problem in a completely different way. TuneSat monitors digital television signals to capture the ‘plays’ of digital content as described in the WSJ article:
TuneSat… uses digital technology to monitor satellite TV signals from around the world and keep track of how music is being used in theme songs, advertisements, background soundtracks and other broadcast situations. Schreer is CEO and Woods is COO of the company.
Beyond that, they say TuneSat may help disrupt the performing rights business, an industry with $2 billion in revenue in the U.S. and $9 billion worldwide, by putting powerful algorithms directly in the hands of copyright owners that allow them to scour and analyze the use of their work across the entire national TV market. A web-based application allows subscribers to access TuneSat's servers and its proprietary analytic tools, in the process allowing them to bypass traditional royalty rights organizations, if they choose.
TuneSat has the opportunity to be a real disruptor in the content business, and it would seem unlikely that many of those 250 local (European) operators would be able to withstand a challenge from companies like TuneSat. And with a $9 billion market opportunity, there are likely to be more competitors emerging.
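The fingerprint-and-match idea behind a service like this can be illustrated with a toy sketch. To be clear, this is my own simplification: TuneSat's actual technology is proprietary and far more robust (matching spectral audio features, not raw samples), and every name below is hypothetical.

```python
def fingerprint(samples, window=4):
    """Hash every overlapping window of a sample stream into a set of marks."""
    return {hash(tuple(samples[i:i + window]))
            for i in range(len(samples) - window + 1)}

def match_score(broadcast, track, window=4):
    """Fraction of a track's fingerprint marks found in a broadcast stream."""
    track_fp = fingerprint(track, window)
    return len(track_fp & fingerprint(broadcast, window)) / len(track_fp)

# A track embedded inside a longer broadcast stream scores a full match;
# that detected 'play' is what would be logged to support a royalty claim.
track = [3, 1, 4, 1, 5, 9, 2, 6]
broadcast = [7, 7] + track + [8, 8, 8]
print(match_score(broadcast, track))  # 1.0
```

The key design point is that the copyright owner, not a collecting society, holds the catalog of fingerprints and runs the matching, which is what makes the bypass of traditional intermediaries possible.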
While the founders of TuneSat believe that the existing agencies are undercounting royalties earned (and data collected by TuneSat seems to bear this out), artists also believe that collecting agencies are impeding the growth and development of markets. The collecting agencies may not be very good at collection and might also be limiting the exploitation of the rights they manage. Rights holders have pointed to the lower penetration levels of digital content in Europe as evidence of the impediments collecting agencies place on markets. Pink Floyd and Radiohead blame the fiefdom-like structure of the 'market,' which gave rise to more than 250 collecting agencies in continental Europe, for suppressing the establishment of new businesses in the pan-European market.
It would seem that the European collecting agency market is ripe for disruption and, with the significant amounts of money at stake, it's only a matter of time before that happens. European regulators must realize that the interests of the many trump the interests of the few: a more open market will ultimately benefit consumers by encouraging the provision of new services and products, while giving artists and content owners more options for managing their interests. It's just not clear whether the European Commission aspires to either of these ends.
Labels: Million Dollar Question, Money
Wednesday, July 25, 2012
Fair Dealing for Copyright in Canada
Copyright issues in Canada may not be at the top of most people's list of interesting news items, but Canada may be on the cusp of legislating new copyright reforms, and the reason may be a recent set of rulings from its supreme court. The court's rulings covered several media formats and distribution methods and could generally be construed as a win for consumers. That would be 'free-loaders' if you belonged to a certain group that saw the rulings as giving consumers a way to make broader use of content without paying for it. The details of the rulings make this conclusion less clear, and the result may be that the Canadian legislature enacts a new set of copyright rules by the end of the year.
The rulings covered music and educational materials (content) and centered on the issue of 'fair dealing', which equates to the US fair use doctrine and similarly requires a review of specific criteria to determine whether a use can be considered 'fair dealing' and thus is permitted. From the ruling, Judge Abella sets out these criteria (corrected from Rothstein):
… the concept of fair dealing allows users to engage in some activities that might otherwise amount to copyright infringement. The test for fair dealing was articulated in CCH as involving two steps. The first is to determine whether the dealing is for the allowable purpose of “research or private study” under s. 29, “criticism or review” under s. 29.1, or “news reporting” under s. 29.2 of the Copyright Act. The second step of CCH assesses whether the dealing is “fair”. The onus is on the person invoking “fair dealing” to satisfy all aspects of the test. To assist in determining whether the dealing is “fair”, this Court set out a number of fairness factors: the purpose, character, and amount of the dealing; the existence of any alternatives to the dealing; the nature of the work; and the effect of the dealing on the work.
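Read literally, the two-step structure above can be modeled as a checklist. This is a loose illustrative sketch only: in reality the fairness factors are weighed qualitatively by a court, not evaluated as simple booleans, and the names below are my own.

```python
# Step-1 purposes from ss. 29, 29.1 and 29.2 of the Copyright Act.
ALLOWABLE_PURPOSES = {"research", "private study",
                      "criticism", "review", "news reporting"}

# Step-2 fairness factors set out in CCH.
FAIRNESS_FACTORS = ["purpose of the dealing", "character of the dealing",
                    "amount of the dealing", "alternatives to the dealing",
                    "nature of the work", "effect of the dealing on the work"]

def fair_dealing(purpose, factors_weigh_as_fair):
    """Step 1: is the purpose allowable? Step 2: do the fairness factors
    favour the dealing? The onus is on the person invoking fair dealing,
    so any unassessed factor counts against them in this toy model."""
    if purpose not in ALLOWABLE_PURPOSES:
        return False
    return all(factors_weigh_as_fair.get(f, False) for f in FAIRNESS_FACTORS)

print(fair_dealing("private study", {f: True for f in FAIRNESS_FACTORS}))  # True
print(fair_dealing("entertainment", {f: True for f in FAIRNESS_FACTORS}))  # False
```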
In reviewing each of these fairness hurdles, Abella offered several zingers, and, reading between the lines, it doesn't seem there was much sympathy for the argument the education publishers presented. For example, in reviewing the 'purpose' factor she dismissed the notion that "private study" should be understood as requiring users to view copyrighted works in splendid isolation (my italics), and the notion that focusing on the 'geography' of teaching could artificially separate the teacher from the studying students. At issue is whether teachers can be separated from the students with respect to the use of the content, and Judge Abella politely shoots this down, saying that it,
with respect, was a flawed approach. First, unlike the single patron in CCH, teachers do not make multiple copies of the class set for their own use, they make them for the use of the students. Moreover, as discussed in the companion case SOCAN v. Bell, the “amount” factor is not a quantitative assessment based on aggregate use, it is an examination of the proportion between the excerpted copy and the entire work, not the overall quantity of what is disseminated.
Interestingly, the Judge is suggesting that it doesn't really matter how big the class is; what matters is the amount of material taken from the entire work, which equates to the US concept of fair use.
In the press, the reaction to the set of decisions oscillated between 'free content' and 'the end of publishing'. A point-counterpoint ran in Canada's Financial Post. De Beer suggests that the ruling is not an assault on copyright (as his fellow FP columnist Corcoran contends) but rather an opportunity for innovation in education:
The education case that Financial Post editor Terence Corcoran calls an assault on copyright will drive innovation in classrooms across the country by providing necessary breathing room for teachers and students to deal fairly with copyright-protected materials. Schools will probably continue a trend that predates this decision by shifting away from collective blanket licences. But, where copying goes beyond validated fair dealings, institutions will instead choose market-oriented solutions like custom database subscriptions and direct licences on various terms from authors or publishers.
In my (biased) view, the Judge's comments with respect to the 'alternatives to the dealing' argument presented by the publishers were the most interesting.
I also have difficulty with how the Board approached the “alternatives to the dealing” factor. A dealing may be found to be less fair if there is a non-copyrighted equivalent of the work that could have been used, or if the dealing was not reasonably necessary to achieve the ultimate purpose (CCH, at para. 57). The Board found that, while students were not expected to use only works in the public domain, the educational institutions had an alternative to photocopying textbooks: they could simply buy the original texts to distribute to each student or to place in the library for consultation.
She goes on to suggest that buying books for the entire class when only a portion is needed is not realistic:
Under the Board’s approach, schools would be required to buy sufficient copies for every student of every text, magazine and newspaper in Access Copyright’s repertoire that is relied on by a teacher. This is a demonstrably unrealistic outcome.
Here there may be some similarity to the recent Georgia State case in the US, where Judge Evans plainly stated that if a publisher's chapter is readily and easily available and the permission is set at a "reasonable price", then the law comes down on the publisher's side. Abella does not go this far; however, there's some logic in taking her argument down that path. This may be consoling to Canadian rights holders if they are able to easily deliver the 'excerpt' in question rather than the entire book. In other words, if the precise excerpt were available and reasonably priced to the student, would Abella's argument be as strong?
Lastly, Abella thought the publishers' argument regarding financial harm caused by teachers' photocopying spurious, and pointed to many other macro issues impacting publishers' fortunes, such as "the adoption of semester teaching, a decrease in registrations, the longer lifespan of textbooks, increased use of the Internet and other electronic tools, and more resource-based learning."
In his concluding comments, de Beer suggests that this ruling may undercut copyright agencies' (such as Access Copyright) desire to license use on a universal basis. Others have made similar arguments: blanket agreements will become less viable options, and many educational institutions will establish direct agreements with select publishers and seek permission for other material on an as-needed basis. This follows the substantial increase in the per-head rate for universal access that Access Copyright rolled out to educational institutions late last year, the size of which was 'heavily debated'; a shift away from blanket licenses will come as welcome news to many Provosts.
All interesting developments, but the most interesting outcome may concern the government's effort to reform Canadian copyright. Given these rulings (not all covered here), content owners may be motivated to pressure the legislature to set rules more in their favor but that remains to be seen.
Monday, June 04, 2012
Georgia State Update from Inside Higher Ed
Inside Higher Ed reviews the latest information pertaining to the Georgia State eReserves case (IHE):
Strictly speaking, Cambridge v. Patton only pertains to e-reserve practices at Georgia State. However, given the ambiguity over the boundaries of educational fair use with respect to academic libraries, many observers expect that the resolution of the case will cause ripple effects in e-reserve policies across the country.
As with the last proposed settlement by the publishers, this new proposal is likely to ruffle feathers among academic librarians.
"The proposed order is clearly intended to humiliate [Georgia State] and to make fair use as difficult as possible for them," wrote Kevin Smith, scholarly communications officer at Duke University, on his blog. "It reads to me like a party who actually won very little at the trial still trying to spike the ball in the other parties’ face."
Background from an earlier PND post - here.
Wednesday, May 16, 2012
MediaWeek (Vol 5, N 20): Georgia State Opinion Round-Up
For those interested in how discussions are setting up around the Georgia State eReserves Case:
Kevin Smith at Duke (perhaps the first to write in detail about the opinion):
Overall there is good news for libraries in the decision issued late yesterday in the Georgia State University e-reserves copyright case. Most of the extreme positions advocated by the plaintiff publishers were rejected, and Judge Evans found copyright infringement in only five excerpts from among the 99 specific readings that had been challenged in the case.
That means she found fair use, or, occasionally, some other justification, in 94 instances, or 95% of the time.
But that does not make this an easy decision for libraries to deal with. Indeed, it poses a difficult challenge for everyone involved, it seems. For the Judge, it was a monumental labor that took almost a year to complete. She wrote 350 pages, working through a raft of legal arguments first and then painstakingly applying them to each of the challenged readings. And for me, with a week’s vacation pending, I am trying to make sense of this tome before I leave, which is why I am writing this at four in the morning on a Saturday (please excuse typos!).
James Grimmelmann: Inside the Georgia State Opinion
Thus, the operational bottom line for universities is that it’s likely to be fair use to assign less than 10% of a book, to assign larger portions of a book that is not available for digital licensing, or to assign larger portions of a book that is available for digital licensing but doesn’t make significant revenues through licensing. This third prong is almost never going to be something that professors or librarians can evaluate, so in practice, I expect to see fair-use e-reserves codes that treat under 10% as presumptively okay, and amounts over 10% but less than some ill-defined maximum as presumptively okay if it has been confirmed that a license to make digital copies of excerpts from the book is not available.
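Grimmelmann's operational bottom line reads like a decision rule, and can be sketched as one. This is a hypothetical simplification, not the court's test: the actual opinion weighs all four fair-use factors, and the revenue prong is rarely knowable in advance.

```python
def ereserve_presumptively_fair(pages_used, total_pages,
                                digital_license_available,
                                licensing_revenue_significant=None):
    """Rough model of the bright-line treatment described above."""
    if pages_used / total_pages < 0.10:
        return True   # under 10% of the book: presumptively okay
    if not digital_license_available:
        return True   # larger portions okay if no digital excerpt license exists
    if licensing_revenue_significant is None:
        return False  # the revenue prong usually can't be evaluated in practice
    return not licensing_revenue_significant

print(ereserve_presumptively_fair(25, 300, True))   # True: under 10%
print(ereserve_presumptively_fair(60, 300, True))   # False: 20% and licensable
print(ereserve_presumptively_fair(60, 300, False))  # True: no license available
```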
The most interesting issue open in the case is the scope of any possible injunction. Given that Georgia State won on sixty-nine out of seventy-four litigated claims, while the publishers won on only five, I expect that any injunction will need to be rather narrow. But given how amenable the court’s proposed limits are to bright-line treatment, it is likely that the publishers will push to write them into the injunction.
My bottom line on the case is that it’s mostly a win for Georgia State and mostly a loss for the publishers. The big winner is CCC. It gains leverage against universities for coursepack and e-reserve copying with a bright-line rule, and it gains leverage against publishers who will be under much more pressure to participate in its full panoply of licenses.
ARL: GSU Fair Use Decision Recap and Implications (PDF) (Hat tip Brantley)
In addition to the statutory factors, courts are required to consider how a proposed fair use serves or disserves the purpose of copyright, which is to encourage the creation and dissemination of creative works. The judge’s reasoning here is perhaps the most compelling and shows that she took into account some key facts about the academic publishing market that are often overlooked in these discussions. Based on testimony from GSU professors, the judge finds that academic authors and editors are motivated by professional reputation and achievement and the advancement of knowledge, not royalty payments, and that any diminution in royalty payments due to unlicensed course reserves would have no effect on their motivation to produce scholarship. Indeed, because the authors of such works are also the primary users of course reserve systems, they would experience a net benefit from fair use in that context. The court emphasizes that publishers receive so little income from licensing excerpts as a percentage of their overall business that the slight diminution caused by allowing unlicensed posting to course reserves would have no cognizable effect on their will or ability to publish new works. Unfortunately, these additional considerations do not enter into the individual determinations. Rather, the court finds that any uses that stay within her framework will serve the purposes of copyright, and those that stray beyond it will disserve them.
In Some Leeway, Some Limits, over at Inside Higher Ed:
While the legal analysis may take time, both publishers and academic librarians have reacted strongly throughout the case. Publishers argued that their system of promoting scholarship can't survive the loss of copyright's benefits. Judge Evans noted in her decision that most book (and permission) sales for student use are by large for-profit companies, not by nonprofit university presses. But the Association of American University Presses has backed the suit by Cambridge and Oxford, saying that university presses "depend upon the income due them to continue to publish the specialized scholarly books required to educate students and to advance university research."
Many librarians, meanwhile, have expressed shock that university presses would sue a university for using their works for teaching purposes. Barbara Fister, a librarian at Gustavus Adolphus College and an Inside Higher Ed blogger, tweeted Friday night: "It still boggles my mind that scholarly presses are suing scholars teaching works that were written to further knowledge."
The reserve readings at the crux of the dispute are chapters, essays or portions of books that are assigned by Georgia State professors to their undergraduate and graduate students. (While the readings are frequently referred to as "supplemental," they are generally required; "supplemental" refers to readings supplementing texts that the professors tell students to buy.) E-reserves are similar to the way an earlier generation of students might have gone to the library for print materials on reserve. The decision in this case notes a number of steps taken by Georgia State (such as password protection) to prevent students from simply distributing the electronic passages to others.
"My initial reaction is, honestly, what a crushing defeat for the publishers," said Brandon C. Butler, the director of public-policy initiatives for the Association of Research Libraries. Given how few claims the publishers won, "there's a 95 percent success rate for the GSU fair-use policy." The ruling suggests that Georgia State is "getting it almost entirely right" with its current copyright policy, he said.
The three publishers brought their suit in April 2008. The Association of American Publishers and the Copyright Clearance Center, which licenses content to universities on behalf of publishers, helped foot the bill.
In their complaint, the plaintiffs alleged that Georgia State went well beyond fair use in how much copyrighted material it allowed faculty members to post online for students. The university denied the claim and overhauled its e-reserves policy in late 2008, after the lawsuit was brought. As a state institution, it also invoked sovereign immunity, which meant that the publishers would have a harder time seeking damages.
Publishers Weekly: AAP Statement on the Opinion
At the same time, we are disappointed with aspects of the Court's decision. Most importantly, the court failed to examine the copying activities at GSU in their full context. Many faculty members have provided students with electronic anthologies of copyrighted course materials which are not different in kind from copyrighted print materials. In addition, the court's analysis of fair use principles was legally incorrect in some places and its application of those principles mistaken. As a result, instances of infringing activity were incorrectly held to constitute fair use. Publishers recognize that certain academic uses of copyrighted materials are fair use that should not require permission, but we believe the court misapplied that doctrine in certain situations. The Court’s ruling has important implications for the ongoing vitality of academic publishing as well as the educational mission of colleges and universities. Contrary to the findings of the Court, if institutions such as GSU are allowed to offer substantial amounts of copyrighted content for free, publishers cannot sustain the creation of works of scholarship. The resources available to educators will be fundamentally impaired.
Ars Technica: Fair Use is Hard
So—crushing victory for Georgia State, whose professors can now dance gleefully through the ash of their foes in publishing? Not quite. After years of litigation, the case came down to 75 particular items that the publishers argued were infringing. Five unlicensed excerpts (from four different books) did exceed the amount allowed under factor three above. These books include The Sage Handbook of Qualitative Research in both its second and third editions, along with The Power Elite and the no-doubt-scintillating tome Utilization-Focused Evaluation (Third Edition).
Inside Higher Ed, With Some Updates
While the university had issued a 2009 guide designed to help faculty know when they needed a license for excerpts, the judge found that the policy "did not limit copying in those instances to decidedly small excerpts as required by this Order. Nor did it proscribe the use of multiple chapters from the same book."
Still, copyright and fair use can be murky, and the judge found no bad faith on the school's part, concluding: "The truth is that fair use principles are notoriously difficult to apply."
Update, 5/15: In a conference call with reporters, Rich, along with Tom Allen, the president of AAP, disputed the popular notion that the publishers had "lost" the lawsuit. Before the publishers brought the suit four years ago, Georgia State's standards for e-reserve copying were far more permissive. Only afterward, in anticipation of a court trial, did Georgia State tighten its e-reserves policies, Rich said. During the trial, Judge Evans said she would only consider the fair use merits of instances of alleged infringement that occurred during a specific period after Georgia State had overhauled its practices.
Therefore, the judge's ruling was based on legal parsing of examples "that nobody thought would be the focal point of this lawsuit when it was brought,” Rich said. “So for Georgia State to declare victory as to those kinds of works is a false trail.”
While the scorecard might not have favored the publishers, the lawsuit forced Georgia State to shore up its e-reserve practices and confirmed that publishers' copyright protections do indeed apply to e-reserves. And that, Rich said, is no small victory. The lawsuit "was never about drawing the line at this point or that point, but to address a system that basically snubbed its nose at copyright," he said. “At a very fundamental level, that issue has been affirmatively addressed."
My contribution: Georgia Opinion - I see opportunity
Judge Evans has plainly stated that if a publisher's chapter is readily and easily available and the permission is set at a "reasonable price," then the law comes down on the publisher's side. She notes specifically the Copyright Clearance Center, which can collect a permissions fee from the user (faculty, librarian, etc.) via Rightslink and, although CCC does not hold the actual content, publishers will be motivated to create digital repositories at a disaggregated level.

Background to the Case:
Chronicle of Higher Ed: What's at Stake in the Georgia Case (2011):
A closely watched trial in federal court in Atlanta, Cambridge University Press et al. v. Patton et al., is pitting faculty, libraries, and publishers against one another in a case that could clarify the nature of copyright and define the meaning of fair use in the digital age. Under copyright law, the doctrine of fair use allows some reproduction of copyrighted material, with a classroom exemption permitting an unspecified amount to be reproduced for educational purposes.

Library Journal (2010):
At issue before the court is the practice of putting class readings on electronic reserve (and, by extension, on faculty Web sites). Cambridge, Oxford University Press, and SAGE Publications, with support from the Association of American Publishers and the Copyright Clearance Center, are suing four administrators at Georgia State University. But the publishers more broadly allege that the university (which, under "state sovereign immunity," cannot be prosecuted in federal court) has enabled its staff and students to claim what amounts to a blanket exemption to copyright law through an overly lenient definition of the classroom exemption. The plaintiffs are asking for an injunction to stop university personnel from making material available on e-reserve without paying licensing fees. A decision is expected in several weeks. The Chronicle asked experts in scholarly communications what the case may mean for the future:
According to a ruling on October 1, the closely watched Georgia State University (GSU) e-reserves lawsuit will come down to whether the named defendants participated in the specific act of "contributory infringement," as the two other original accusations were removed from the case.
This narrows the scope of the charges lodged by the publisher plaintiffs—Oxford University Press, Cambridge University Press, and SAGE Publications—and has Fair Use advocates cautiously optimistic as the case moves closer to trial.
In a blog post, library copyright watchdog and Duke Scholarly Communications Officer Kevin Smith wrote that he was "surprised at how favorable the ruling issued yesterday is to Georgia State; even though the Judge clearly expects to go to trial, there is a lot in her ruling to give hope and comfort to the academic community."
Barring a narrow settlement, the case could have a broad effect on academic library practice. If GSU's current policies are affirmed, libraries nationwide with similar digital reserves policies will be reassured if not emboldened. Should the plaintiffs prevail, however, there is likely to be a considerable chill on Fair Use deliberations as libraries reconsider the digital access they grant to copyrighted materials.
Two levels of infringement tossed out
Judge Orinda Evans of Federal District Court in Atlanta ruled against all of the plaintiffs' motions for summary judgment, and granted two of the defendants' three counter-motions.
This ruling essentially holds that there is insufficient evidence to show that the named defendants (GSU's president, Mark Becker; its provost; its associate provost for technology; and its dean of libraries, Charlene Hurt) committed any acts of infringement, thus ruling out a charge of "direct infringement."
Judge Evans likewise determined that there was no evidence that the defendants profited directly from infringement committed by librarians under their supervision, ruling out "vicarious infringement."
Monday, May 14, 2012
Georgia On My Mind: Fair Use, Digital Availability & Reasonable Pricing
In April 2008, three publishers (Oxford University Press, Cambridge University Press, and SAGE) filed suit against Georgia State University (GSU) for copyright infringement. The publishers charged that university officials had facilitated and encouraged the posting of the publishers' works on university websites and, consequently, made this copyrighted material available to students without compensation to the publishers. While only three publishers were party to the suit, the case has been closely watched on both sides: the three publishers as generally representative of all academic and scholarly publishers, and GSU as representative of educational institutions, particularly academic libraries. Suing your customers is a very unsavory practice, generally frowned on and taken only as a last resort. The publishers felt that this case represented a slippery slope in the expansion of the application of "fair use" within academia that could fully undermine their business models, and it was thus worth fighting despite the potential for negative fall-out.
The case as adjudicated is a victory for GSU, although there are some significant caveats which will become even more important as the publishing business accelerates towards more electronic availability and delivery. First, however, here is how Judge Evans summed up the case (copy at InfoDocket):
Of the 99 alleged infringements that Plaintiffs maintained at the start of trial, only 75 were submitted for post-trial findings of fact and conclusions of law. This Order concludes that the unlicensed use of five excerpts (of four different books) infringed Plaintiffs' copyrights. The question now is whether Georgia State's 2009 Copyright Policy caused those infringements. The Court finds that it did, in that the policy did not limit copying in those instances to decidedly small excerpts as required by this Order. Nor did it proscribe the use of multiple chapters from the same book. Also, the fair use policy did not provide sufficient guidance in determining the "actual or potential effect on the market or the value of the copyrighted work," a task which would likely be futile for prospective determinations (in advance of litigation). The only practical way to deal with factor four in advance likely is to assume that it strongly favors the plaintiff-publisher (if licensed digital excerpts are available). The Court does believe that Defendants, in adopting the 2009 policy, tried to comply with the Copyright Act. The truth is that fair use principles are notoriously difficult to apply. Nonetheless, in the final analysis Defendants' intent is not relevant to a determination whether infringements occurred.

The publishers proved only five of the 99 alleged infringements and will be very disappointed by this result. Further, their financial claims may be marginalized later by the Judge, in which case they are not likely to gain any significant financial 'reward' for these five infringements. (Who would pay in any case is also a question, since the Judge affirmed sovereign immunity, but that's above my pay grade.)
In her explanation, Judge Evans did present some important qualifications in her interpretation (based on the Campbell case which defined four criteria) of the fair use determination.
The most interesting interpretations to me were the following (pages 87-89): Firstly, on the amount of content that could be used under fair use, the Judge stated the following:
Where a book is not divided into chapters or contains fewer than ten chapters, unpaid copying of no more than 10% of the pages in the book is permissible under factor three. The pages are counted as previously set forth in this Order. In practical effect, this will allow copying of about one chapter or its equivalent. Where a book contains ten or more chapters, the unpaid copying of up to but no more than one chapter (or its equivalent) will be permissible under fair use factor three.

That suggests to me that publishers will be encouraged to disaggregate their content into chunks so that each chapter stands independently. Hard to do in print, this is entirely possible electronically (as part of the publisher's digital strategy). Which brings me to the second item of interest in the case:
Unpaid use of a decidedly small excerpt (as defined under factor three) in itself will not cause harm to the potential market for the copyrighted book. That is because a decidedly small excerpt does not substitute for the book. However, where permissions are readily available from CCC or the publisher for a copy of a small excerpt of a copyrighted book, at a reasonable price, and in a convenient format (in this case, permissions for digital excerpts), and permissions are not paid, factor four weighs heavily in Plaintiffs' favor. Factor four weighs in Defendants' favor when such permissions are not readily available.

Judge Evans has plainly stated that if a publisher's chapter is readily and easily available and the permission is set at a "reasonable price," then the law comes down on the publisher's side. She notes specifically the Copyright Clearance Center, which can collect a permissions fee from the user (faculty, librarian, etc.) via Rightslink and, although CCC does not hold the actual content, publishers will be motivated to create digital repositories at a disaggregated level.
Everything connected with digital content continues to move apace, and who knows what the practical impact of this ruling will be as more and more content becomes digitally available and the traditional frameworks around which content is organized begin to erode. The traditional monograph and textbook constructs will dissipate, and this ruling may well give that transition impetus.
CCC has been trying to move institutions towards campus-wide licenses, and this business model has proceeded fitfully over the past three or four years. I suspect this program will become much more interesting to many more administrators given this ruling. In Canada, Access Copyright has attempted to unilaterally apply the all-in model for schools there but has faced tough opposition over the pricing structure. Some schools have been asked to pay several multiples of the amounts they were paying under the old pay-as-you-go model. As the kinks are worked out, Access Copyright is likely to sign up most of the schools in Canada to this program. The UK has had a universal license program for many years.
There's no doubt the application of fair use will continue to generate friction between content owners and (in this case) educators and librarians, but technology continues to advance as well, making all of this content both accessible and trackable. Publishers might be able to live with 10% fair use if they can track and monitor the users, but to do that they will probably have to participate universally in agencies like CCC and Access Copyright.
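The factor-three threshold quoted above reduces to simple arithmetic. Here is a minimal sketch; the function name is mine, and approximating a chapter's "equivalent" as the book's average chapter length is my own assumption, not the court's page-counting method:

```python
def permissible_pages(total_pages, num_chapters):
    """Sketch of the factor-three threshold from the GSU order:
    fewer than ten chapters -> up to 10% of the pages;
    ten or more chapters -> up to one chapter's worth,
    approximated here as total_pages / num_chapters."""
    if num_chapters < 10:
        return total_pages * 0.10
    return total_pages / num_chapters

# A 400-page book with 8 chapters: up to 40 pages, about one chapter.
print(permissible_pages(400, 8))   # 40.0
# A 500-page book with 20 chapters: up to one chapter, about 25 pages.
print(permissible_pages(500, 20))  # 25.0
```

Note how the two branches converge: for a typical book, 10% and "one chapter" are roughly the same amount, which is presumably the point of the rule.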
Wednesday, November 09, 2011
Beyond the Book: How Social Media is Keeping Alive the Journal Article
From CCC's Beyond the Book series, How Social Media is Keeping Alive the Journal Article:
Scholarly communication is rapidly changing, and information managers in private companies and other sectors are finding new ways to serve their users. Social media, mobile devices, data mining, semantic technologies and other developments are creating a whole new environment for publishing. Yet the old standby – the journal article – seems to have no real rival yet.
In this edition of special programming from RightsDirect, CCC’s European subsidiary, Madrid-based Victoriano Colodrón speaks with Hervé Basset, a Paris-based expert in scientific information management, who blogs in English and French, and is currently writing a book on social media for the pharmaceutical industry.
Basset tells Colodrón how the increasing professional use of social media by company researchers is influencing the use of more traditional sources of information, including scientific journals. He also explains why the growing use of social media is changing the role and the work of corporate information professionals.
Link to the Audio
Download Transcript
Wednesday, November 02, 2011
How to reform copyright
Lewis Hyde in the Chronicle of Higher Ed has some interesting observations and proposals for reforming copyright:
Focusing on the benefits of an initial registration requirement tells only one part of the story. Whenever copyright offers a second term, the renewal formality has even stronger commons-enhancing effects. After all, the commercial value of most creative work is exhausted fairly early. A study done of copyrights registered in 1934 found, for example, that half of them were worthless after 10 years, 90 percent after 43 years, and 99 percent after 65 years. It should consequently come as no surprise that many rights holders did not renew after the initial 28-year term. The numbers vary year to year and by genre (music rights being renewed more often than books, for example), but roughly speaking, for most of the 20th century, when owners were given a right to renew, only 15 percent chose to do so. As with initial registration, a renewal formality serves as a filter, releasing commercially dead work to the public without depriving authors of a longer term if they wish to have it. Put another way, formalities effectively shortened the term of the copyright grant during most of the last century; 85 percent of copyrights lasted only 28 years.

and this,
The last time that Congress added years to the term of copyright, a group of economists, both liberal and conservative (including five Nobel laureates), filed a brief with the U.S. Supreme Court arguing that the extension made no economic sense. (Milton Friedman supposedly asked that the brief contain the phrase "no brainer.") It is patently clear to almost everyone that the term of copyright is now senselessly long. At the same time, it is almost certainly politically impossible to retreat from it; the few who benefit are too well connected, and the many who do not are too thinly spread. To my mind, the greatest appeal of new-style formalities, then, is that they would leave the nominal term untouched (and accord it to all who care) while greatly reducing the effective term. Sprigman calculates that during the 20th century, when the vast majority of rights holders did not avail themselves of the renewal option, the effective term of copyright was only 32 years. That's just four years longer than the nominal term the founders offered in 1790.
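Sprigman's 32-year effective term falls out of straightforward expected-value arithmetic using the figures above (an 85 percent non-renewal rate, a 28-year initial term, and a second 28-year term for the 15 percent who renewed). A quick sketch:

```python
initial_term = 28    # years granted before renewal was required
renewal_term = 28    # additional years for the minority who renewed
renewal_rate = 0.15  # roughly 15% of owners renewed

# Expected (effective) copyright term, averaged over all works:
# 85% of works lapse at 28 years; 15% run the full 56.
effective_term = ((1 - renewal_rate) * initial_term
                  + renewal_rate * (initial_term + renewal_term))
print(round(effective_term, 1))  # 32.2 -- roughly Sprigman's 32 years
```

The calculation makes Hyde's point concrete: the formality, not the nominal term, did the real work of limiting copyright's duration.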
Wednesday, August 10, 2011
Beyond the Book with Elsevier's Rafael Sidi
From his Beyond the Book series, Christopher Kenneally interviews Rafael Sidi, Elsevier's VP of Product Management for Applications Marketplace and Developer Network.
“We are letting [researchers] play with our data and build on top of our data stuff that they need to build. In the end, scientists and researchers know their problem better than us.”
Sidi cited a variety of innovative application efforts, including SciVerse, which offers developers access to Elsevier content, and the community-driven projects Apps for Science Challenge and Apps for Library Idea Challenge. (Interview)
Some clips from the transcript of the interview:
So what we are trying to do with the data, we want to give access to our data as we’ve been giving, to make that data easily remixable, reusable among the developers. And wanting that, I’ve been saying that if we let the data to be used by the scientists, by the researchers within our environment, they are going to be able to create much, much better solutions. They are going to be able to create solutions that we couldn’t have imagined.
So what we are doing is just we are going to the crowd. We are letting them play with our data and build on top of our data stuff that they need to build, because at the end, scientists and researchers, they know their problem better than us, in some cases, and what we are doing, we are giving them the tools and we are providing the services for them in this application and developer network in our framework so that they can build using our services and tools.
....
Good question. What we are trying to do right now is to reach out to the community, to the crowd. We’ve been doing different challenges. I mentioned we had a challenge at Rensselaer Polytechnic Institute. Our first one was at New Jersey Institute of Technology. And what we are doing right now, currently we have two different challenges going on.
One, we call it Apps for Science. It’s a challenge among six countries where we are asking developers to submit applications and then we are giving them prizes. And the other challenge that we are doing among our librarians, Apps for Library. So we are asking to librarians to submit ideas. And we are going to – again, a judging committee is going to pick the ideas and then what we are planning to do, some of the ideas we are going to go to our developer network and develop, and those ideas are going to be developed by the developer network.
So, so far, we’ve seen an excellent biomedical image search application that is going to be built by the University of Madison, Wisconsin. So we are getting some ideas that we haven’t thought about it.Just recently, we launched a new app from a company called iSpeech and the app takes the text and then translates to words, so you can just hear the text. And that’s also very important for us in terms of accessibility to the content, making the content easily accessible to everyone. So I if I have an impairment, then I can listen to the text.
Labels:
CCC,
Data,
Journals,
metadata,
Rafael Sidi,
Reed Elsevier
Monday, April 04, 2011
MediaWeek (Vol 4, No 14): Long Distance Learning, Open-Source Textbooks, CCC, HarperCollins
Forbes takes a look at the rapidly expanding long distance learning market in India (Forbes):
Reuters Special Report: Nic Callaway, the publisher of the Madonna "Sex" book, is now building book apps (Reuters)

Gallimard: 100 years in publishing (Guardian)

The $260 million (market cap) Everonn uses a satellite network, with two-way video and audio. It reaches 1,800 colleges and 7,800 schools across 24 of India's 28 states. It offers everything from digitized school lessons to entrance exam prep for aspiring engineers and has training for job-seekers, too. "Never in my wildest imagination did I ever think I would be doing what I am doing today," says 49-year-old Kishore, who along with his family owns nearly 19% of the company. "When I started out I would have been happy if I'd reached 50 schools in south India."

Everonn debuted on FORBES ASIA's Best Under A Billion list in 2010. Revenues for the first three quarters of this fiscal year, through December, rose to $65 million--from $40 million the previous year. Profits touched $9.2 million--up from $6.1 million last year.

Edutopia opines about open source textbooks:

The Argument for Open-Source Curricular Materials:

The week this announcement was made, Edutopia had an article on the use of open source curricular materials – a growing trend being driven, in part, by the extraordinary cost of commercial textbooks. The argument for open curriculum has many elements in common with the argument for the increased use of open-source software. The most obvious feature of free open source (FOS) materials is the lack of cost for the materials themselves – most open-source content is free of cost in digital form. Historically there has been a tradeoff: low cost (or free) comes at the expense of quality. (In other words, "There is no free lunch.") But FOS is different. Indeed, I've long argued that FOS software has the advantage of being free of cost while, at the same time, providing greater value to the users.

This Lunch Is Not Only Free, It's Really Good:

The pairing of high quality with reduced cost seems counter-intuitive at first glance, but makes sense once you look into the open source community more deeply. Many of the developers and maintainers of open source materials are people who use these materials themselves, and thus have a strong interest in keeping the quality as high as possible. Historically this has been true since the creation of the Oxford English Dictionary – arguably the definitive dictionary of the English language, whose entries were (and are) submitted by language fanatics, making it one of the largest and earliest open-source documents.

Washington Post on Orphans:

This may well be a practical solution, but the issue should not be Google's to decide. As the lawfully elected representatives of rights holders and readers, Congress is best positioned to determine how copyright should apply in this case. An essential piece of any such solution is a body, similar to the recording industry's ASCAP, that would be able to search for rights holders, disperse funds and oversee collective licensing of copyrighted works. This is an accepted strategy for exactly such situations, where an opt-in approach would be prohibitively onerous.

And Tracey Armstrong, CEO of CCC, comments that this entity already exists (WAPO):

In fact, such an organization has been in existence for more than 30 years: the Copyright Clearance Center.

Mercury News on orphan legislation:

However, Google might choose to drop its court efforts altogether and take its cause to the legislative branch, a course that would benefit the public interest.

This new strategy would be to have Congress pass legislation that would primarily make orphan works available to the public. Congress has considered similar legislation before, once in 2006. At that time, the U.S. Copyright Office advocated that after a thorough search failed to uncover the rightsholder, orphan books should be made available to the public. The legislation stalled because Congressional policy makers wanted to see how the Google Books case would play out in the courts.

Now that the outcome is known, Congress can act. Legislation would allow not only Google and commercial enterprises but also libraries and universities to digitize works.

Allowing these organizations to scan out-of-print books and make millions of printed works readily available to the public will usher in an era of digital enlightenment.

Cory Doctorow in the Guardian on loaning eBooks:

Now, in point of fact, many ordinary trade books circulate far more than 26 times before they're ready for the discard pile. If a group of untrained school kids working as part-time pages can keep a copy of the Toronto Star in readable shape for 30 days' worth of several-times-per-day usage, then it's certainly the case that the skilled gluepot ninjas working behind the counter at your local library can easily keep a book patched up and running around the course for a lot more than 26 circuits.

Indeed, the HarperCollins editions of my own books are superb and robust examples of the bookbinder's art (take note!), and judging from the comments of outraged librarians, it's common for HarperCollins printed volumes to stay in circulation for a very long time indeed.

But this is the wrong thing to argue about. Whether a HarperCollins book has the circulatory vigour to cope with 26 checkouts or 200, it's bizarre to argue that this finite durability is a feature that we should carefully import into new media. It would be like assuming the contractual obligation to attack the microfilm with nail-scissors every time someone looked up an old article, to simulate the damage that might have been done by our careless patrons to the newsprint that had once borne it.
Subscribe to:
Comments (Atom)