
8.4.20: Council of Luminaries CLIENT-SIDE

Updated: Aug 24, 2020

The LVN Council of Luminaries comprises veteran thought leaders and pioneers in the business of law community. The objective behind assembling this group is to establish a brain trust of recognized, experienced, and progressive leaders who can help interpret developments in the market, provide advice on how to successfully approach and navigate challenges, predict future trends and changes to prepare for, and participate in targeted special projects to benefit the legal industry as a whole.

The below meeting was conducted primarily with the Council of Luminaries CLIENT-SIDE group.

Has getting “bad data” from your outside legal counsel ever affected your decision to retain that firm in the future? For example, if a firm uses a lot of wrong task coding (e.g., lots of time dumped into L120), or sets up task codes improperly, has that affected your decision, either in the past or potentially in the future, to retain that firm again?

Are there penalties for bad data?

Luminary 1 – It would depend on what happened. If it’s deliberate – playing fast and loose – that would cause me a lot of concern. Would I fire them? Maybe. I’d give them a slap on the wrist for sure. If firms are going to be sloppy or loose, that will clearly have an impact.

Luminary 2 – If they are doing it on purpose, of course. We held a process improvement session with the law firm and my people to see where firm data entry could be improved. Maybe there are too many steps, maybe a partner didn’t realize how important it is to get things right. Better data helped the law firm collect faster and saved time due to less back and forth. If they still don’t get it after that, I’d have a pretty tough conversation.

For international billing, they really did have to get that right. If they stopped getting that right, we’d have a serious conversation.

Luminary 3 – Ditto. If I could definitely say the mindset was there to do it purposely, that’s a real problem. Most of the time it is sloppiness. But – and maybe this is lack of training – is the sloppiness not knowing which codes to use? Or overusing codes as catchalls? Anything can become “analysis and strategy” or “doc review” if you let it. Maybe not purposeful coding problems, but lack of training.

Luminary 4 – I don’t have an example, but when it comes to things like this – it often hurts the firms more than the client. Client looks at the overall impact to the budget. But if the firms are relying on these codes to model pricing, it hurts them more than anything else.

Luminary 2 – The other reason it hurts them is the time wasted and the time to collect. It’s so much better if the data is correct. A 2-hour meeting with a partner where you complain is 2 hours that partner can’t bill. They are often clueless. Some partners have said to me “oh I thought you would want the bill late because it’s better for your cash flow.”

A lot of this is just dumping into codes that have always been used, rather than asking how to do it right.

Coding problems and unusable data

Luminary 5 – I have a couple of perspectives. 1) We have to make clear the friction for them when they misuse these codes. Our auditors are too good – they will cut inappropriately coded time. Even if the firm gets paid in the end, it takes time, or they get paid at a reduced rate. And it’s not just us. Talk to the legal billing providers – it’s easy to pick up on that stuff. So it’s foolish for firms not to have training around this. We are not trying to play ‘gotcha’; we want them to understand the consequences. 2) When the data is not clear, we do not get the information to inform our own internal processes. When we don’t have accurate data, it impacts how we view the firm’s effectiveness. What we want to see is error rates under 2-3%. We are not looking just to cut bills. We are looking for better data. Being clear about that can put you in lockstep with the firms.

Luminary 6 – There’s a more fundamental issue, which is that the codes don’t meet our needs. There is no correlation between the work done and the codes. Even the most accurately coded invoice would not really tell you, for example, how much time they spent drafting the summary. We need a better way. The UTBMS was a construct from decades ago, built for different technology. I’d rather see some AI-based tool.

Luminary 5 – That’s a great point. In absence of a better construct, some are using NLP on the descriptions, that’s getting us closer.

Luminary 6 – It definitely gets us closer. Examen was doing this years ago before they were bought by LexisNexis, and now we have better Natural Language Processing (NLP). But we are trying to get to “what is a reasonable time to spend on that task.” That’s what I want to know. “On average others can do a letter of intent in 4 hours, but you took 9. Why?”
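The benchmark idea above (“others do a letter of intent in 4 hours; you took 9”) can be sketched in a few lines. This is a toy illustration, not any vendor’s method: the keyword lists, narratives, and hours below are all invented, and a real system would use trained NLP models rather than substring matching.

```python
from statistics import mean

# Hypothetical task buckets keyed by narrative keywords (all made up).
KEYWORDS = {
    "letter of intent": ["letter of intent", "loi"],
    "expert deposition": ["expert", "deposition"],
}

def classify(narrative):
    """Crude stand-in for NLP: label a line by the first matching keyword."""
    text = narrative.lower()
    for label, words in KEYWORDS.items():
        if any(w in text for w in words):
            return label
    return "unclassified"

# Hypothetical billing lines: (narrative, hours billed).
lines = [
    ("Draft letter of intent for acquisition", 3.5),
    ("Revise LOI per client comments", 4.5),
    ("Prepare expert witness for deposition", 8.0),
]

def peer_average(label):
    """Average hours peers spend on lines classified under `label`."""
    hours = [h for narrative, h in lines if classify(narrative) == label]
    return mean(hours) if hours else None

avg = peer_average("letter of intent")
print(f"Peers average {avg} hours on a letter of intent; 9 hours is {9 / avg:.1f}x that")
```

In practice the classifier would be a model trained over full narrative text; the peer-average comparison that drives the “why did you take 9 hours?” conversation stays the same.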

Luminary 2 – That makes a lot of sense for the law firm. From the client standpoint, that kind of detail isn’t so important to me. But the biggest problems were billing to the wrong matter, wrong code. We didn’t care as much about the bigger picture.

Luminary 6 – It does make sense for in-house because it helps drive the discussions on how to get higher value work from law firms. How do you actually unbundle, get AFAs, etc.

Luminary 5 – If you don’t have good analysis or data, it’s hard to price your AFAs. That’s really where we put it to work. But the biggest bang for our buck: we typically had an advantage over our firms in that we knew what it should take to do the work. In fact, we had to bring the firms we were entering into AFAs with up to speed. We don’t want to put one over on the firm; we want it to be fair, but we don’t want to overpay either.

Empathy for firms who must deal with complicated codes, billing guidelines, budget forms

Luminary 7 – Has anyone tried using any of those AI-powered systems that use NLP on the descriptions?

Luminary 5 – We do have some modelers using NLP to overlay on our bills. We use CounselLink as our billing platform with mixed results. We do not use Brightflag, but have seen demos and knowledge shares, and they’ve done some interesting things to reclassify and refine based on their own code set built from the narratives. I would encourage people who are truly interested in this to take a look at that.

Luminary 8 – You talk to 5 different lawyers and ask them how to code the same thing, you’ll get 5 different variations. How can anyone code something correctly for all the different clients who each have their own code sets? Do firm lawyers just say, “To heck with it, it’s too complicated. I’ll take a slap on the wrist”?

Luminary 6 – I know of one firm with enough clients that it maintains 800 different custom code sets.

Luminary 5 – We are really doing ourselves a disservice by not having data standards. Maybe there’s room for something around the edges, but we really need standards.

Luminary 8 – One firm I used to work with let their lawyers code the same way for every client, but had some tech solution in the background fixing it for individual clients. There’s no standard, so a lot of clients are building code sets without input from their firms. We are being unrealistic. If we were on the firm side, we’d be throwing up our hands.

Luminary 5 – I understand the pain. Billing guidelines are problematic, too. Someone behind the scenes reprices what everyone’s done to comply with guidelines. I also feel the pain of firms on budgeting. Everyone wants a customized budget form.

Luminary 3 – Luminary 8 brought up a good point about how it can be difficult to code. For example, an expert witness deposition can be coded 100 different ways – but we need to determine a true value around those tasks. How do we isolate the data to understand ‘cost per discrete task’? This keeps us from driving those AFAs harder.

But on the client side we are not helping the issue. These codes were designed for industry standardization, and by each client customizing, we’ve made it impossible for the firms. But how do we start anew?

Luminary 8 – About five years ago we asked our top five litigation firms to help us develop a “should cost” model for a single-plaintiff, one-off product liability matter. How much should an expert deposition cost? How much for a fact witness? All five said they’d code it differently. We realized at that moment that our e-billing data was no help in figuring out task-based, flat-fee AFAs. Our e-billing system is good for blocking and tackling and guideline compliance, but not for value-based billing. We had to start from scratch.

Luminary 3 – Can we limit it to the basics? The discovery codes, the pleading codes – can we just bucket at that level? It’s easier to get to flat fees or staged fees if you get past the fact that there are so many different ways it can be coded. It’s simpler to say, “these are pleadings, this is discovery.”

Luminary 8 – We did some legwork and we basically created our own code set for a single plaintiff product liability matter. That’s how we had to do it.

Luminary 4 – Do you also vary it by level of complexity? There are things that make a particular deposition cost more than the average. How do you capture that?

Luminary 8 – Sometimes you have an expert witness who might not need to be as intensive as some others. They are not all created the same. But there’s a benchmark as a starting point. I have the data, but I still need to leverage the lead attorney to understand what’s needed in the context of a particular matter.

Luminary 3 – If we were to do that, we’d first categorize the cases. We have tons of relatively simple cases. Those will have one set of assumptions. Patent litigation or other complex matters will have different ones. We’d tier it out or adjust based on the level of complexity. There isn’t a technology out there that’s cracked that nut yet. We’re maintaining a lot of this data manually. For better or worse, I maintained an Excel workbook for years. It just got to be too much. Too much is in my head. I’ve been looking for a tech solution to bridge that gap. Even the ones I see that say they are task-based, I don’t see much that really leverages the data for proposal evaluation. I’ve seen good ones on the tracking side, but not for proposal evaluation.
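The tiered “should cost” approach described above (simple cases get one set of assumptions, complex matters another) amounts to a lookup plus a complexity multiplier, which is roughly what a spreadsheet like the one mentioned encodes. A minimal sketch, with every task name, tier, and dollar figure invented for illustration:

```python
# Hypothetical "should cost" benchmarks per discrete task, tiered by matter
# complexity. Every task, tier, and dollar figure here is a made-up placeholder.
BASE_COST = {
    "expert deposition": 12_000,
    "fact witness deposition": 6_000,
    "motion to dismiss": 15_000,
}
TIER_MULTIPLIER = {"simple": 0.8, "standard": 1.0, "complex": 1.6}

def should_cost(task, tier="standard"):
    """Benchmark cost for a task, adjusted for the matter's complexity tier."""
    return BASE_COST[task] * TIER_MULTIPLIER[tier]

def evaluate_proposal(task, tier, quoted, tolerance=0.15):
    """Compare a firm's quoted flat fee to the tiered benchmark, within tolerance."""
    target = should_cost(task, tier)
    within = abs(quoted - target) <= target * tolerance
    return target, "within range" if within else "outside range"

target, verdict = evaluate_proposal("expert deposition", "complex", 21_000)
print(f"Benchmark: ${target:,.0f} -> quote is {verdict}")
```

A real proposal evaluator would layer on per-jurisdiction adjustments and, as noted above, the lead attorney’s judgment on matter-specific factors the multiplier can’t capture.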

So what does it take for a firm to get fired?

Luminary 8 – How many have fired a firm for anything other than something close to malpractice?

Luminary 2, Luminary 5, Luminary 9, Luminary 3 all raise hands.

Luminary 2 – We’ve done it for lack of diversity.

Luminary 1 – If the quality is bad, if it’s a big firm it might be limited to that attorney or group. We have a lot of conflict issues. That’s usually not an issue, but sometimes they ask for a waiver late, or it’s a litigation where it’s difficult to grant or they ask for advance waivers. We don’t necessarily fire a firm, but we won’t use them again.

Luminary 2 – I have a “never hire” file for folks who pushed the envelope too far on conflicts. Or there are instances where the most renowned firms are not as good as advertised in some areas.

Luminary 9 – Unfortunately it’s usually tied to substance. I wish I could say I had the broad authority for diversity. But it’s usually a change in the team or a bad outcome. Or a phased matter that becomes more specialized. It’s not usually something operational.

Luminary 5 – Oftentimes we don’t want to work with an individual lawyer within the firm, so we might wall off that lawyer if we still trust the quality of the firm. But it’s typically a progression. They get multiple strikes. If we audit and the firm has a sky-high error rate, and their guideline compliance is terrible, we’ll give them a remediation plan. We might not rip any cases from them, but they’re not getting any new ones. Our panel is a living, breathing thing – we have overlapping jurisdictions – so we will make a change if needed. And they know where things stand.

Conflict issues

Luminary 8 - How many of you have been fired by a firm?

Luminary 1 – It was a conflict situation. This firm was representing us and another party. Mostly the other party but we were co-defendants. The other party was paying. The firm wanted us to waive conflict so they could later sue us on the same matter. We originally thought it wasn’t a waivable conflict, but our counsel said it was. We still didn’t grant it and they fired us.

Another of our practice areas was using that firm, and they lost their trusted attorneys over the issue. But they understood. Now a lawyer we really like has moved there from another firm. But we won’t reengage.

Luminary 3 – Advance waivers are a no-no. We’ve had firms walk away due to conflicts. For us it’s a balance of how much of our work they are willing to put up with. We only grant advance waivers if it’s a tiny piece of work in a limited area.

Luminary 3 – The reaction of the law firm has a lot to do with how we go forward. If they are defiant, we have a problem. I’ve seen numerous relationships die that way.

Luminary 4 – We have the most productive relationship when there is a strong relationship team at the firm. I look for that team and our team to help us end up in a good place. There needs to be a lot of support from those who lead the relationship from the firm side.

I’ve seen the firms that want to come back after they’ve been let go vs. the firms that want to rebuild. It’s easy to tell the difference. When I was at a firm, I was on the inside of being fired. It’s usually bungled billing, not keeping the client informed, or something bordering on unethical behavior.

Luminary 5 – Bloomberg and Westlaw have tools where you can easily find out the general mix of matters that firms are handling. And you may be shocked to find out the kind your firms are doing in other practice areas that may give you pause. It may not be a pure conflict, but maybe it gives your organization angst if they are taking a position that is antithetical to what you need. Some handle plaintiff matters, for example. Beware if you start using one of these tools: once you know what they do, you may have to make a decision. Sometimes ignorance is bliss.
