The Program Manager’s Real Talk Guide to Conflict of Interest in Award Judging
Yeah, it’s awkward. Here’s how to handle it before it handles you.
Picture this. You’ve spent months building your award program. The submissions are strong, your judges are credentialed, and everything looks clean — until someone quietly mentions that one of your panelists used to work at the company that just submitted the frontrunner entry.
Now what?
Do you say something? Swap them out? Hope nobody notices?
This is exactly the moment most programs get into trouble. Not because anyone did something malicious. Just because nobody had a plan.
That’s what this guide is for. Let’s walk through it together — conflict of interest in award judging, what it actually means, why it matters more than most people think, and what you can do about it at every stage of the process.
So, What Even Is a Conflict of Interest Here?
Here’s the honest answer: it’s messier than most people expect.
A conflict of interest (COI) happens when a judge has some kind of personal, professional, financial, or organizational tie to an entry they’re evaluating. That tie could — or could seem to — tilt their scoring, even if it genuinely doesn’t.
That second part is the part that catches people off guard. You don’t need proof of bias. The appearance of bias is enough to do real damage to your program’s reputation.
The U.S. Department of Justice, National Science Foundation, ACM Awards Committee, and ASU’s research administration guidelines all broadly agree on four types of conflicts to watch for:
| Type | What It Actually Looks Like |
|---|---|
| Personal / Relationship | Spouse, family member, close friend, thesis advisor, recent collaborator (think: last four years) |
| Financial | Judge holds a financial stake in an entrant’s org, or has funding ties to related entities |
| Organizational / Institutional | Current or recent employer matches a nominee’s institution — or there’s a parent/subsidiary connection |
| Appearance of Bias | Judge publicly championed a nominee on LinkedIn. Volunteered with their team. No formal tie, but it still looks off |
That last one. That’s the one that trips people up most.
A judge who posts “So excited to see [Company X] nominated — they’re doing incredible work!” hasn’t necessarily done anything wrong. But the moment they score that company’s submission, you’ve got an optics problem. And optics problems have a way of becoming credibility problems fast.
Wait — Is This Actually a Legal Issue?
Sometimes, yes. If your program involves federal funding, COI management isn’t just best practice — it’s required by law. U.S. federal regulations under 2 C.F.R. § 200.112 mandate disclosure of potential conflicts. The NSF requires organizations with more than 50 employees to have written COI policies in place before a single dollar is spent.
Not federally funded? Still worth aligning with these standards. It signals to everyone watching — applicants, sponsors, stakeholders — that your program takes fairness seriously.
Stage 1: Prevention — Build the Safety Net Before Anyone Falls
Here’s a truth that every experienced program manager eventually learns: the best time to deal with a COI is before it exists.
Programs that treat COI as a recruitment issue rather than a mid-process crisis spend a lot less time scrambling later. So let’s start there.
Recruit for Impartiality, Not Just Impressive Bios
Of course credentials matter. But a world-class expert who has cozy ties to half the entrants in your category isn’t actually serving your program. Award Force recommends screening judge candidates for both domain expertise and their ability to evaluate objectively.
Use your judge application form to ask candidates to self-report:
- Any past relationships with likely entrants or their organizations
- Financial interests connected to the award category
- Prior public endorsements of nominations or specific candidates
- References who can speak to their professional impartiality
If your award community is small and everyone knows everyone? ASU’s guide on external solicitation design has a practical fix for that: recruit external reviewers specifically to dilute the overlap. It’s not glamorous, but it works.
Anonymize Everything You Can
This one’s simple, and it’s powerful. Strip identifying details from submissions before judges ever see them — names, gender, institution, location, anything that isn’t directly relevant to the evaluation criteria.
Award Force highlights that anonymization does two things at once: it reduces unconscious bias across the board, and it makes the remaining COIs easier to catch because they’re more likely to be real, substantive connections rather than surface-level coincidences.
Less noise. Cleaner signal. Better decisions.
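If you want a feel for what this looks like mechanically, here's a minimal sketch of submission blinding, assuming a simple dict-based submission record. The field names (`entrant_name`, `institution`, and so on) are illustrative, not any real platform's schema.

```python
# Illustrative anonymization pass: strip identifying fields before judges see
# the entry. Field names here are assumptions, not a real submission schema.

IDENTIFYING_FIELDS = {"entrant_name", "institution", "location", "gender"}

def anonymize(submission: dict) -> dict:
    """Return a copy of the submission with identifying fields removed."""
    return {k: v for k, v in submission.items() if k not in IDENTIFYING_FIELDS}

entry = {
    "entrant_name": "Jane Doe",
    "institution": "Example University",
    "summary": "A community literacy initiative serving three districts.",
    "metrics": {"participants": 420},
}

blinded = anonymize(entry)
# blinded now contains only the fields judges need: summary and metrics
```

The key design choice is an explicit allow/deny list: you decide up front which fields are evaluation-relevant, rather than trusting each judge to ignore the rest.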
Write the Policy Down. Actually Write It Down.
A “we’re all committed to fairness here” culture isn’t a policy. It’s a vibe. And vibes don’t hold up when someone files a complaint.
The NSF’s grant administration standards are clear: you need written, enforceable COI policies — not just shared values. Your policy should spell out:
- A clear definition of COI covering all four categories
- Mandatory disclosure requirements with specific timelines (not just “when it feels right”)
- Recusal protocols — exactly what a conflicted judge must do, step by step
- Consequences for non-disclosure — what happens if someone doesn’t come forward
And alongside the COI policy, RQ Awards makes a point worth remembering: publish your evaluation criteria before the program opens. Opaque scoring creates cover for bias. Transparent criteria give you something solid to point to when decisions get questioned.
Every judge should sign both a COI agreement and a confidentiality agreement before they access a single submission. Per ASU’s guidelines, these aren’t formalities. They’re non-negotiable prerequisites.
Stage 2: Identification — Catch Conflicts Before They Become Problems
Prevention is great. It’s not foolproof. People forget to disclose things. New conflicts emerge after assignments go out. Judges don’t always realize their connection is relevant until they’re looking at a specific entry.
So you need multiple chances for conflicts to surface — not just one upfront declaration.
Make Disclosure the Easy, Expected Thing to Do
The ACM Awards Committee’s COI framework requires committee members to disclose COIs before any discussions begin. Not during. Not after someone else notices. Before.
The timing matters more than it might seem. Once a judge has heard arguments and formed impressions, their neutrality is already compromised — even if they recuse themselves afterward.
Build disclosure checkpoints into the process at multiple stages:
- During judge application — self-reported relationships and affiliations upfront
- Upon receiving assignments — judges review their allocated entries and flag anything that gives them pause
- Before deliberations begin — a formal COI check-in at the start of each judging session
The V5RC judging guide puts it well: judges should declare conflicts as soon as they’re identified. Not when it’s convenient. Immediately.
Use Software to Catch What Humans Miss
At small scale, manual tracking is fine. As your program grows, it becomes a liability.
Award Force recommends using award management software to track judge-entry assignments and automatically flag potential conflicts based on disclosed affiliations. The DOJ’s COI guidance suggests a systematic checklist approach — asking judges about family connections, financial ties, and organizational relationships for each assigned entry.
When that checklist is embedded into the assignment workflow, it stops being something people remember (or forget) to do manually. It just happens.
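As a sketch of what that embedded check can look like: cross-reference each judge's disclosed affiliations against the organizations behind their assigned entries. The data shapes and names below are hypothetical, not any particular platform's API.

```python
# Illustrative conflict flagging: compare each judge's disclosed affiliations
# against the organizations of their assigned entries. Hypothetical data
# shapes, not a real award-management API.

def flag_conflicts(assignments: dict, disclosures: dict) -> list:
    """Return (judge, entry_id) pairs where a disclosed affiliation
    matches the entry's organization and a human should review."""
    flags = []
    for judge, entries in assignments.items():
        affiliations = disclosures.get(judge, set())
        for entry in entries:
            if entry["org"] in affiliations:
                flags.append((judge, entry["id"]))
    return flags

assignments = {
    "judge_a": [{"id": "E1", "org": "Acme Corp"}, {"id": "E2", "org": "Beta Labs"}],
    "judge_b": [{"id": "E1", "org": "Acme Corp"}],
}
disclosures = {"judge_a": {"Acme Corp"}}  # former employer, disclosed upfront

print(flag_conflicts(assignments, disclosures))  # [('judge_a', 'E1')]
```

Note that a flag isn't a verdict — it routes the pair to a human for the recusal decision, which is exactly the checklist-in-the-workflow idea.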
Stage 3: Management — When a Conflict Surfaces, Move Fast
This is where a lot of programs hesitate. Someone discloses a conflict, and the instinct is to slow down — to think about it, to weigh whether it’s “really” a problem, to give the judge the benefit of the doubt.
Don’t. Speed and decisiveness here actually protect everyone, including the judge who disclosed.
What Recusal Actually Means
“Recusal” sounds straightforward. In practice, it means different things to different programs — and that ambiguity creates real problems.
Be specific. The ACM’s conflict of interest policy defines recusal as being entirely absent from discussions about the conflicted entry. Not just abstaining from the vote. Out of the room. Off the thread. Done.
The V5RC framework goes even further — prohibiting conflicted judges from participating in team interviews or reviewing related materials.
Your recusal protocol should explicitly cover:
- Full absence from all discussions related to the conflicted entry — not just the final vote
- No advocacy — can’t argue for or against that entry in any forum, formally or informally
- No access to deliberation notes or scoring data for that entry
- Written documentation of the recusal, including what the conflict was and who made the call
One more thing: if the conflicted party is the committee chair, the ACM recommends that a designated deputy chair take over management of that discussion. Name the deputy in advance. Don’t improvise this.
Reassignment: Fill the Gap Quickly
Recusal leaves a hole in your judging coverage. Fill it. Award Force recommends using software to reassign entries to non-conflicted judges with as little friction as possible. ASU’s guidelines suggest recruiting backup reviewers specifically to maintain your minimum review thresholds — usually three independent reviews per entry — even when conflicts reduce your effective panel size.
Here’s the practical math: if you need eight evaluators to cover your submissions, recruit twelve. The extra four exist for exactly this scenario.
| Response Step | What It Means in Practice | Key Source |
|---|---|---|
| Immediate recusal | Conflicted judge exits all discussion, scoring, and advocacy | ACM, V5RC |
| Entry reassignment | Re-allocate to non-conflicted judges; maintain 3-review minimum | Award Force, ASU |
| Deputy oversight | Deputy chair steps in if the chair holds the conflict | ACM |
| Independent monitoring | Designated reviewers manage restrictions; disclose interests publicly where appropriate | NSF |
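The coverage math behind reassignment can be sketched in a few lines, assuming each entry maps to the set of judges reviewing it. The names and the three-review threshold below are illustrative.

```python
# Illustrative coverage check after a recusal: remove the conflicted judge
# from one entry's panel, then list entries that fell below the review
# minimum and need a backup reviewer. Names are assumptions.

REVIEW_MINIMUM = 3

def recuse(coverage: dict, judge: str, entry_id: str) -> None:
    """Remove a conflicted judge from one entry's review panel."""
    coverage[entry_id].discard(judge)

def entries_needing_backup(coverage: dict, minimum: int = REVIEW_MINIMUM) -> list:
    """Return entries whose panel dropped below the review minimum."""
    return [e for e, judges in coverage.items() if len(judges) < minimum]

coverage = {
    "E1": {"judge_a", "judge_b", "judge_c"},
    "E2": {"judge_b", "judge_c", "judge_d"},
}
recuse(coverage, "judge_a", "E1")        # judge_a discloses a conflict on E1
print(entries_needing_backup(coverage))  # ['E1'] -- assign a backup here
```

This is also why the "recruit twelve to cover eight" advice works: the backup bench is what makes the second function's output actionable instead of alarming.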
When You Can’t Fully Remove the Conflict
Small programs and volunteer-driven communities face a harder reality. Sometimes your pool is so tight that complete avoidance isn’t possible. Everybody knows everybody.
The V5RC judging guide addresses this honestly: a conflicted judge can still offer general advisory input to the program — they just can’t evaluate the specific entries where the conflict lives. The goal is containment. Limit the conflict’s influence as much as your structure allows, document your reasoning, and apply the policy consistently across the board. No exceptions for “trusted” people.
Stage 4: Transparency and Accountability — Document Like Your Reputation Depends on It
Because it does.
When something gets questioned — and in award programs, something always eventually gets questioned — your documentation is what stands between you and a credibility crisis. Transparency isn’t just an ethical aspiration. It’s a risk management strategy.
What Goes Public
RQ Awards and the Program Evaluation Standards both recommend sharing your judging criteria, your judge roster (where judges have consented to being listed), and a plain-language description of the overall process with stakeholders — both before and after the program.
You don’t need to publish every deliberation note. You do need to show that a fair, structured process happened.
What Stays Internal (But Still Gets Written Down)
Every COI incident deserves a paper trail. For each one, document:
- The nature of the conflict and exactly when it was disclosed
- Who made the recusal decision and what that decision was
- How and to whom the entry was reassigned
- The final scoring outcome for that entry (to confirm the recusal held)
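One lightweight way to keep that paper trail consistent is a structured record per incident. The field names below are assumptions that mirror the checklist above, not a mandated format.

```python
# Illustrative COI incident record for the internal paper trail. Field names
# mirror the documentation checklist above; they are not a mandated format.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class COIIncident:
    judge: str
    entry_id: str
    nature: str                  # e.g. "former employer, recent collaborator"
    disclosed_on: date
    decision: str                # e.g. "full recusal"
    decided_by: str
    reassigned_to: list = field(default_factory=list)
    final_score_confirmed: bool = False  # set True once the recusal held

incident = COIIncident(
    judge="judge_a",
    entry_id="E1",
    nature="former employer",
    disclosed_on=date(2025, 3, 4),
    decision="full recusal",
    decided_by="program chair",
    reassigned_to=["judge_e"],
)
```

Because every record has the same fields, the cross-cycle pattern analysis the paragraph below describes becomes a simple query instead of an archaeology project.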
These records do double duty. They protect your organization if a complaint or audit arises, and they give you the data to strengthen your process next cycle. Patterns tend to show up in the same places. You want to see them.
Judge Orientation Isn’t Optional
Before any judge opens a single submission, they need a real orientation. Not a welcome email. A structured session that covers your evaluation rubrics, your COI definitions and disclosure procedures, and your confidentiality requirements.
ASU’s guidelines treat orientation as a required step, not a courtesy. And the research backs it up: judges who understand the policy before they encounter a conflict are significantly more likely to disclose promptly and correctly.
An ounce of prevention, and all that.
Common Mistakes That Quietly Wreck Programs
These aren’t rare edge cases. They show up in award programs all the time.
“They’re probably fine.” This one’s tempting. The judge is trustworthy. The connection seems minor. You don’t want to make it awkward. But the standard isn’t whether you trust the judge — it’s whether a reasonable outside observer would see a problem. If yes, recuse. Full stop.
Delayed disclosure. The DOJ’s guidance includes a case study of an executive director who stayed involved in vendor selection despite personal ties to vendors. Not malicious — just no clear boundary in place. Disclosure protocols that wait for conflicts to feel “serious enough” always catch them too late.
Treating public endorsements as a non-issue. If a judge has publicly championed a nominee online, the ACM’s approach is instructive: they can stay on the panel, but they must recuse from evaluating that specific nomination. Keep them for what they can objectively evaluate. Remove them from what they can’t. That’s the right balance.
Not building a bench. High-volume programs that don’t recruit backup reviewers end up making bad tradeoffs when conflicts reduce coverage. The ASU recommendation to recruit external reviewers isn’t a nice-to-have. It’s the mechanism that keeps your review minimums intact when the real panel shrinks due to conflicts.
A Word on Software and Tooling
Manual COI management works fine when your program is small. Once it scales, manual tracking introduces exactly the kind of inconsistency and human error that creates liability.
Award management platforms that support anonymization, assignment tracking, conflict flagging, and recusal workflows take a significant operational load off program managers — and they create the audit trail that accountability actually requires.
When you’re evaluating tools, look for:
- Anonymization controls that can be applied per-category or per-submission
- Conflict tracking that cross-references judge disclosures against their assigned entries
- Reassignment workflows that automatically maintain your review minimums
- Audit logs that capture disclosure events, recusal decisions, and assignment changes in real time
At Nobel, we built the platform specifically to support this kind of structured, compliant judging process — giving program managers the controls they need without burying them in administrative overhead.
The Short Version (If You’re Skimming)
Managing COI isn’t about assuming your judges are dishonest. Most COI situations involve genuinely good people in genuinely ambiguous situations with no clear guidance in sight. Your job is to build the structure that removes that ambiguity — so when a conflict surfaces, everyone already knows what to do.
Here’s the framework, distilled:
Key Framework Elements
- Start early. As Award Force puts it, screening during recruitment is “an investment in credibility.” It costs a fraction of what managing a public controversy costs after the fact.
- Create multiple disclosure checkpoints. One upfront declaration isn’t enough. Build moments for disclosure throughout the entire judging lifecycle.
- Act decisively. When a conflict surfaces, the recusal and reassignment process should be swift, consistent, and documented — every single time.

