
Archive for July, 2012

Defensible Deletion or Defensible Storage?

Defensible deletion is one of those topics lawyers have been hearing about a lot lately. Unfortunately, it’s also a subject that just means headaches for lawyers and needs to die. Just this week, a poorly run deletion policy has had an enormous and possibly devastating effect on a major lawsuit. A federal judge has hit Samsung with an adverse inference in its high-profile suit against Apple because it failed to stop its automatic email deletion program in time. “In effect, Samsung kept the shredder on long after it should have known about this litigation,” the judge wrote.

It has been a decade since Arthur Andersen went out of business after it was caught indefensibly deleting Enron documents. Since that time, records managers have invested a lot of time and energy in developing “defensible deletion” strategies. Deidre Paknad, Director of Information Lifecycle Governance Solutions at IBM, recently wrote about defensible deletion for Forbes. She writes that defensible deletion is “helping companies curb storage growth and cut costs, increase eDiscovery efficiency, and ensure that regulatory obligations for information are satisfied.” At the eDiscovery Journal, Barry Murphy wonders what to call the process: defensible deletion, active expiration, or defensible disposition. Whatever it’s called, it has no place in the world of litigators or in the context of eDiscovery.

(more…)

Read Full Post »

An eDiscovery Primer for Solo & Small Firms

Download Free Today!

Get Our Free White Paper to Learn How.

Small firms might imagine eDiscovery is like a John Grisham movie, where a plucky young attorney practicing law in a rundown office is hopelessly outgunned by a large, well-heeled law firm. There certainly was a time when big law firms could count on bigger budgets and staff to out-muscle opposing counsel at smaller law firms. But you can now download a free, 10-page white paper to find out how changes to state and federal rules have made it possible for even solo firms to offer complete eDiscovery services.

Titled eDiscovery: Leveling the Playing Field, it provides an overview of what a small firm needs to know in order to keep costs low, get the information needed, and offer competitive eDiscovery services in any context.

The paper points attorneys to the important case law and Federal Rules of Civil Procedure (which most states have copied), definitions of important terms, and litigation tips that every attorney should know before going into discovery. Most importantly, lawyers are shown how to make the rules work for them, taking advantage of cooperative discovery practices to keep costs low while obtaining the digital records needed to make a case. The paper is an introduction, not just to the rules and case law small firm attorneys need to know, but to a philosophy that allows smart attorneys to secure a just and fair decision for clients in the face of a deluge of digital evidence.

Written in plain English, with a glossary of common eDiscovery jargon, the paper will give experienced litigators a useful refresher, while attorneys with no experience in the eDiscovery process should find the information they need to make discovery projects manageable. Download it today.

Read Full Post »

Tom Chernaik is CEO of CMP.LY. The company’s unique disclosure solution is built on a foundation of Tom’s experience and insights in marketing, law, social media and entrepreneurship. Tom is Co-Chair of the Members Ethics Advisory Panel of the Word of Mouth Marketing Association (WOMMA) and is a frequent speaker at events on social media ethics and disclosure.

Nextpoint: Why can social media disclosure be such a headache when it comes to regulatory compliance?

Tom Chernaik: There are three sources of headaches: understanding the appropriate regulations and guidelines; developing a social media policy aligned with business objectives; and ensuring adherence to that policy.

Depending on industry sector, US companies may need to address the requirements of the Federal Trade Commission (FTC), the Securities and Exchange Commission (SEC), the Financial Industry Regulatory Authority (FINRA) and/or the Food and Drug Administration (FDA). While the policies of each agency continue to evolve, sufficient guidance has been provided for companies to develop compliant processes for social media communications.

Many companies see the regulations as reasons to significantly limit their social media activities. While marketers are beginning to recognize social media as an effective means to build customer relationships, increase awareness of their brands and boost sales, a lack of clarity on how to implement compliant social media programs often holds companies back or exposes them to unnecessary risk or liability. Companies that don’t resolve these conflicts face competitive disadvantages that will only become more significant in the near future.

(more…)

Read Full Post »

Pricing for eDiscovery and technology services in litigation is not a straightforward matter. But it should be.

Unfortunately, many lawyers and corporations don’t know much about what eDiscovery costs or what services a vendor provides until after litigation begins. Thanks to a recent ruling, In re Aspartame Antitrust Litig., 416 Fed. Appx. 208 (3d Cir. 2011), we have at least a peek behind the curtain. In this case, the court was asked to calculate recoverable eDiscovery costs, and in doing so, provided an itemized list of the costs three separate parties incurred in one large, but fairly typical, case.
Lawyers Wasting Money

In this matter, three defendants were awarded almost $800,000 in eDiscovery and related technology costs for services ranging from the restoration of backup tapes to data extraction. However, it’s not clear that all of these charges were necessary.

Fighting Over Pointless Charges

The eDiscovery industry is still very fragmented, and there are a lot of companies that provide very limited services at what seems like an extreme cost. For example, there are companies in eDiscovery that build software products to solve just one problem, such as identifying duplicate emails in a data collection. In this matter, the defendants paid tens of thousands of dollars to a large number of vendors for services that should, in theory, be part of an integrated solution.
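To make the point concrete, here is a minimal sketch of what one of those single-purpose tasks, identifying exact duplicate emails, amounts to in practice. The email bodies and the choice of SHA-256 hashing are illustrative assumptions, not any particular vendor’s implementation; production deduplication typically also normalizes headers, attachments, and metadata.

```python
# A minimal sketch of hash-based duplicate detection (illustrative only).
import hashlib

emails = [
    "Please review the attached agreement before Friday.",
    "Please review the attached agreement before Friday.",  # exact duplicate
    "Reminder: review the agreement before Friday.",         # near-duplicate, not caught by hashing
]

seen = {}
for i, body in enumerate(emails):
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    if digest in seen:
        print(f"email {i} duplicates email {seen[digest]}")
    else:
        seen[digest] = i
```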

In the In re Aspartame Antitrust Litigation, much of the cost was associated with processing and production. One party paid over $34,000 just to use software for data extraction and processing. Additionally, Bates Labeling, Confidentiality Coding and “Production Support Services” appear as additional line items on the list of charges. The same goes for OCR and conversion of TIFFs to PDFs. All of these services are included with a Discovery Cloud license. Users can perform these functions themselves at no charge, or pay our services team by the hour to process data.

(more…)

Read Full Post »

“A recent study completed at Boston University School of Law concluded that the cost of defending against patent trolls in 2011 totaled $29 billion in the U.S. According to the participants of Article One’s Napa Summit, a single NPE (N.B. this is a euphemism for patent troll) lawsuit can cost a company an average of $1.1 million.”
– “6 Ways To Work More Effectively in the U.S. Patent System”, Elizabeth Hampton, Corporate Counsel

Having worked on a number of patent litigation matters, we can say that defense counsel has a clear opportunity to significantly reduce the cost of managing these suits. Unfortunately, the financial incentive to do so is often missing, but the opportunity to defend intellectual property more efficiently is real.

The Article One Partners report noted above documented those findings and offered six takeaways for increasing efficiency. Unfortunately, there’s a fundamental problem that inhibits efficient defense in intellectual property cases.

Reduce Litigation Defense Spend Through Collaboration

It begins with the first takeaway: “reduce the costs of litigation defense through collaboration.” Makes sense, right? But it’s not quite so easy. Law firms have a terrifically hard time collaborating with co-counsel, even in situations where there is no potential for an adversarial outcome.

In their DNA, law firms are built specifically to contest the demands of outside law firms. Lawyers are trained to do everything in their power to deny requests from other firms. ESI, evidence, documents, email, spreadsheets, whatever you want to call it: law firms are designed NOT to share data.

One by-product of this is technology that inhibits collaboration. The normal pattern we see in joint defense groups is to deploy a shared platform for discovery or trial preparation long after the litigation has commenced, doing it wrong the first time and then spending a lot of money to get it right. That means clients pay double.

There’s no need to wait. Get it right from the start. Start out with an architecture that encourages co-defendants to share data, not one that prevents it. And start with a platform that gets you through trial, not just a point solution. The time savings for attorneys and staff, multiplied across defendants, will far outstrip the costs. And when the cost is running $29 billion annually, there is plenty of room for improvement.


Rakesh Madhava is CEO of Nextpoint, which he founded in 2001. Rakesh has more than 17 years of experience in legal technology and trial support with specific experience in white collar criminal defense, breach of contract, intellectual property, shareholder suits, tortious interference suits and product liability. During his career, Rakesh has been retained by a number of Fortune 500 companies including Exxon, Verizon, Morgan Stanley and Astellas Pharmaceutical as well as many of the top law firms in the nation. Notable cases Rakesh has personally supported include U.S. v. Ryan, U.S. v. Razmilovic, et al. (Symbol Technologies), Coleman (Ron Perelman) v. Morgan Stanley, and the Tobacco Fee Arbitrations.

Read Full Post »

Somehow, editors across the country have gotten the idea that computers will replace lawyers in litigation. The Wall Street Journal asked, Why Hire a Lawyer? Computers Are Cheaper, and The New York Times promised a world of Armies of Lawyers, Replaced by Cheaper Software. Columnist Paul Krugman even picked up the theme to discuss the economy. Most recently, the New Scientist suggested that Lawyerbots Take the Drudgery Out of Law.

eDiscovery Review-bots?

It’s certainly a compelling narrative, but the discussion is obscuring the real issues complicating litigation. (We’re not sure what a lawyerbot is, but it is fun to imagine the Pentagon has a prototype Johnny 5 in a lab somewhere wearing a pinstripe suit, loafers, and a laser cannon.)

None of this is really about robots or computers taking legal jobs. The topic is the use of machine learning in litigation to review documents. The review process is the most painful, labor-intensive, and costly phase of discovery, but the problem isn’t whether machines or humans will do it better. In fact, research shows that human review and computer-assisted review can be equally ineffective. For all the attention it gets, predictive coding has only been used in a handful of cases involving especially large data sets.

It’s the Process, Not the Technology

While “predictive coding” sounds like a great new technology designed to save litigation, it’s actually a process. Predictive coding is the application of machine learning algorithms to assist in the relevancy review process by recognizing responsive documents in eDiscovery. But that only works if human reviewers examine sample documents the computer returns, and then use that material to train the computer to find similar documents. It’s an iterative process that must be repeated over and over with careful input from humans. The image of a lawyerbot is fun, but in practice it means a smaller number of lawyers working more closely with their software to train it. It certainly does not mean computers will replace humans.
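To give a rough sense of what that iterative loop looks like, here is a minimal sketch assuming a TF-IDF and logistic-regression classifier, a common approach but not necessarily what any given review platform uses. The documents, the seed labels, and the review step are all placeholders.

```python
# Minimal sketch of one round of the iterative "predictive coding" loop.
# Assumes scikit-learn; documents and human-coded labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "meeting notes re: pricing agreement with competitor",
    "lunch order for the office party",
    "draft contract terms and termination clause",
    "family vacation photos attached",
    "pricing discussion follow-up, keep this confidential",
    "IT notice: password reset required",
]

# Round 1: a human reviewer codes a small seed set (1 = responsive, 0 = not).
seed_indices = [0, 1, 2, 3]
seed_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(documents)

model = LogisticRegression()
model.fit(features[seed_indices], seed_labels)

# The model ranks the unreviewed documents; reviewers check the ranking,
# correct it, and the corrected labels feed the next training round.
unreviewed = [i for i in range(len(documents)) if i not in seed_indices]
scores = model.predict_proba(features[unreviewed])[:, 1]
for idx, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"doc {idx}: predicted relevance {score:.2f} -> send to human review")
```

In a real review, the reviewers’ corrections on each ranked batch would be added to the training set and the model refit, round after round, until the documents still unreviewed score reliably low.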

Not Ready for Prime Time

Jason Baron is the National Archives’ Director of Litigation and is one of the founding coordinators of the TREC Legal Track, a search project organized through the National Institute of Standards and Technology to evaluate search protocols used in eDiscovery. He recently spoke at the Seventh Circuit Electronic Discovery Workshop on Computer-Assisted Review. “Again let me be clear: I remain a strong cheerleader and advocate for advanced forms of search and document review,” he said. “But there are dozens of open questions that remain for further research in this area, and would caution against making judicial short-cuts to findings that say this area is essentially a ‘solved problem’ for lawyers. We ain’t there yet.”

Experienced litigators agree. Not surprisingly, Sidley Austin attorney David Breau told the Wall Street Journal, “Computers excel at sifting through a big pile of stuff and sorting it into categories.” But he went on to note that lawyers are still needed to review the documents once they are sorted before turning them over to the other side. The most pernicious problems that bedevil eDiscovery need to be addressed by humans: identifying and protecting work product, identifying sources of data and the custodians of that information, and of course, finding the smoking gun emails that make your case. There is no machine that does these things.

It will take time for predictive coding to fully mature. The TREC Legal Track has been cancelled for at least a year, although at least two other studies into advanced search protocols are underway. Courts will eventually settle the predictive coding question. But for now, lawyers need to remain focused on the core problems in litigation: finding platforms that can manage large volumes of evidence and review processes that can protect privilege, all while controlling costs. Machine learning can play a role in very large cases, but only as one component of a well-designed eDiscovery process.

Read Full Post »

For science geeks, the real Fourth of July fireworks came with the announcement that the Large Hadron Collider in Switzerland had found a particle likely to be the Higgs boson, the last unobserved particle that completes the Standard Model of physics.

The discovery is a massive achievement that is the culmination of centuries of research, insight, sacrifice, and ingenuity. The Large Hadron Collider (LHC) near Geneva, Switzerland, cost $10 billion to build and uses approximately 5,000 superconducting magnets to fire subatomic particles around a 17-mile underground tunnel at near the speed of light. The particles are smashed together inside massive, highly sensitive detectors designed to identify the debris from these collisions.

Supercomputers for Supercolliders

The European Organization for Nuclear Research’s (CERN) supercollider is one of the largest, most complex, and most expensive science experiments in the history of the human race. But the real achievement is not simply smashing particles. We’ve been doing that for decades, with varying degrees of precision. This week’s announcement is in large part a triumph of computing power against massive amounts of data.

See the Higgs boson? Supercomputers say it’s there. Probably.

CERN is at the center of the Worldwide LHC Computing Grid, which connects scientific sites in 34 countries through a distributed computing and data storage infrastructure. To give a sense of the computing power needed, the Large Hadron Collider produces roughly 1 million particle collisions per second for extended periods of time, and physicists are trying to identify a particle that appears in perhaps 1 in a trillion collisions. At the end of 2010, the Grid harnessed roughly 200,000 computer processing cores to crunch 150 petabytes of collected data.
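Some back-of-envelope arithmetic, using only the figures quoted above, helps convey the scale. The numbers come from the post itself; the calculation is purely illustrative.

```python
# Back-of-envelope arithmetic with the figures quoted above (illustrative only).
collisions_per_second = 1_000_000   # ~1 million collisions per second
event_rarity = 1e12                 # a candidate appears in perhaps 1 in a trillion collisions
data_collected_tb = 150 * 1024      # 150 petabytes, expressed in terabytes
cpu_cores = 200_000                 # processing cores on the Grid at the end of 2010

seconds_per_candidate = event_rarity / collisions_per_second
print(f"~{seconds_per_candidate / 86400:.1f} days of continuous running per candidate event")

print(f"~{data_collected_tb / cpu_cores:.2f} TB of collected data per processing core")
```

At those rates, a single candidate event takes on the order of weeks of continuous running to produce, which is why the distributed Grid matters as much as the collider itself.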

Physics for Lawyers

There are important lessons the rest of the world, including the legal community, can glean from this effort. The scientific world has long been a leader in pooling computing resources; it’s not an accident that the World Wide Web was invented at CERN. Today, CERN is driving the development of next-generation cloud computing platforms with advanced research into grid computing.

The Large Hadron Collider was able to succeed by pooling the knowledge and resources of thousands of the best and most highly trained minds in the world, coupled with immense computing resources. Lawyers will likely never need to process the same volumes of data the LHC generates, but in many ways, their challenge is more complex. Lawyers are dealing with information generated by humans, which is arguably more unpredictable and subtle than subatomic collisions. (At least scientists know they’re looking for a particle with certain characteristics. No one knows what a smoking gun email looks like until they find it.)

When faced with a massive data processing problem, too many law firms are still trying to buy the computers necessary to crunch the data themselves. That’s no longer the answer, not when there are commercial providers that offer computing resources that can be effectively deployed on demand. The scientific community understands the value of shared resources and cloud computing, and thanks to their efforts the technology is advancing at a rapid pace. The legal world must follow that example or lose the battle with big data.

Read Full Post »
