For quick definitions of key terms used in this guide, see the Crypto Dictionary. Browse the full course here: Fundamental Analysis Hub.
A full crypto project due-diligence checklist is not just a list of questions. It is a structured judgement system. TMU’s full Fundamental Analysis method uses 10 scoring categories, a 0 to 5 scoring scale, evidence-quality flags, outcome bands, override rules, and uncertainty handling. The goal is not false precision. The goal is to make project judgement more disciplined, more comparable, and harder to distort with promotion, incomplete evidence, or one strong narrative layer.
What Is A Crypto Project Due Diligence Checklist?
A crypto project due-diligence checklist is a structured method for judging whether a project deserves serious attention as an investment candidate.
A good checklist does more than collect notes. It forces you to examine the major evidence layers, score them consistently, and state where the case is strong, weak, incomplete, or contradictory.
That matters because crypto research breaks down when it becomes impression-led. A project can look strong because the token is trending, because the founder sounds persuasive, or because one metrics dashboard looks attractive. None of that is enough on its own.
Why Full Due Diligence Matters In Crypto Fundamental Analysis
Full due diligence matters because crypto projects often combine real innovation, weak disclosure, early-stage uncertainty, token-market distortion, and uneven evidence quality all at once.
Common problems include incomplete documentation, immature products, noisy communities, token designs with hidden weaknesses, exaggerated partnership language, and adoption claims that may or may not be durable.
A loose research habit cannot handle that well. Full due diligence forces you to examine the project from several angles at once instead of letting one attractive feature dominate the whole conclusion.
How This Checklist Fits Into The FA Hub Course
This lesson sits after Lesson 11 and before Lesson 13 for a reason. Lessons 1 to 11 taught the major evidence layers that make up project-level Fundamental Analysis. Lesson 12 now turns those separate lessons into one capstone worksheet.
Lesson 1 gave the learner the first-pass beginner framework. Lessons 2 to 11 taught the main research domains. Lesson 12 turns those domains into one full scoring method. Lesson 13 then applies that method through case-study practice.
So this lesson does not repeat the beginner framework and it does not become a full case-study lab. Its role is methodological. It gives the learner the exact TMU worksheet for turning research into a final due-diligence judgement.
The TMU Crypto Due Diligence Worksheet
The TMU worksheet uses 10 fixed categories. Each category is scored from 0 to 5. Each category also receives an evidence flag. After that, the learner totals the score, checks the outcome band, applies override rules, and records uncertainty before writing the final conclusion.
| Category | What It Tests | Course Link |
|---|---|---|
| 1 Project Purpose And Problem | Whether the project solves a real problem clearly enough to justify serious analysis. | Lesson 1 |
| 2 Market And Category Fit | Whether the project makes sense inside a real market category. | Lessons 1 And 11 |
| 3 Market Capitalisation And Valuation | Whether valuation size fits maturity, quality, and evidence. | Lesson 2 |
| 4 Tokenomics And Value Capture | Whether token design is transparent, investable, and linked to value creation. | Lesson 3 |
| 5 Documentation And Roadmap Quality | Whether docs and roadmap materials improve judgement rather than obscure it. | Lesson 4 |
| 6 Team And Execution Credibility | Whether the people behind the project look capable of delivering. | Lesson 5 |
| 7 Partnerships And External Credibility | Whether outside support is real, relevant, and verified. | Lesson 6 |
| 8 On-Chain Activity, Security, Usage Evidence | Whether credible public evidence supports the project case. | Lesson 7 |
| 9 Adoption, Community, Development Activity | Whether the project looks alive, maintained, and capable of sustaining real user interest. | Lessons 8 And 10 |
| 10 Regulation, Competition, Risk Profile | Whether wider structural risks materially weaken the research case. | Lessons 9 And 11 |
The worksheet is designed to reduce loose narrative judgement, create more consistent comparison across projects, and force visible treatment of weak evidence and unresolved uncertainty.
The 10 Core Scoring Categories
Each category asks a different question. Together, they force the learner to look at the project from several angles instead of hiding behind one strong story.
Project purpose and problem asks whether the project is solving a real problem clearly enough to justify serious analysis.
Market and category fit asks whether the project makes sense inside a real market category and has believable market context.
Market capitalisation and valuation asks whether valuation size makes sense relative to maturity, quality, and evidence.
Tokenomics and value capture asks whether the token design is transparent, investable, and linked to value creation.
Documentation and roadmap quality asks whether whitepapers, docs, and roadmap materials improve judgement rather than obscure it.
Team and execution credibility asks whether the people behind the project look capable of delivering.
Partnerships, investors, and external credibility asks whether outside support is real, relevant, and verified.
On-chain activity, security, and usage evidence asks whether there are credible signs of real usage, technical life, and security awareness.
Adoption, community, and development activity asks whether the project looks alive, maintained, and capable of sustaining real user interest.
Regulation, competition, and risk profile asks whether wider structural risks materially weaken the research case.
How To Score Each Category From 0 To 5
Use this scale exactly. Do not score the story. Score the evidence supporting the story.
| Score | Meaning | Research Treatment |
|---|---|---|
| 0 | No usable evidence, severe weakness, or evidence that undermines the category. | Hard weakness unless later evidence changes. |
| 1 | Very weak evidence, major gaps, or serious unresolved concerns. | Strong caution. |
| 2 | Weak or partial evidence with important uncertainty. | Not enough for confidence. |
| 3 | Adequate evidence, but with clear limitations or unresolved questions. | Usable but not strong. |
| 4 | Strong evidence with normal uncertainty. | Positive category signal. |
| 5 | Very strong, well-supported evidence with clear relevance and limited unresolved concern. | Rare score. Use carefully. |
How To Mark Evidence Quality
Every category score must carry one evidence flag. A number without an evidence flag is too easy to overtrust.
| Flag | Meaning | How To Use It |
|---|---|---|
| Verified | The score is supported by clear and credible evidence. | Trust more than the same score with weaker evidence. |
| Partial | Some evidence exists, but it is limited in scope, depth, or reliability. | Reduce confidence if the missing detail could change the verdict. |
| Missing | The evidence needed for a confident score is absent. | Do not reward the category for absence of proof. |
| Contradictory | The evidence conflicts in a material way. | Reduce trust until the conflict is resolved. |
| Not Applicable | The category is not meaningfully relevant in the normal way. | Use sparingly and explain clearly. |
Outcome Bands: Pass, Caution, Watchlist, Or Fail
After scoring all 10 categories, total the result out of 50. The outcome bands give the learner a disciplined starting point before override rules and uncertainty handling are applied.
| Total Score | Band | Meaning |
|---|---|---|
| 40 To 50 | Pass | The project survives full due diligence well enough to be treated as fundamentally credible, subject to override rules and uncertainty handling. |
| 30 To 39 | Caution | The project has enough quality to stay alive as a serious candidate, but unresolved issues still matter. |
| 22 To 29 | Watchlist | The project is not strong enough yet, but may deserve monitoring if evidence improves. |
| 0 To 21 | Fail | Too many categories are weak, too much evidence is missing, or the risk profile is too compromised. |
These bands are not buy or sell signals. They are due-diligence outcome labels that help discipline the research process.
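As a minimal sketch, the band lookup above can be written as a single function. The thresholds mirror the table; the function name is illustrative, not part of the TMU worksheet itself.

```python
def outcome_band(total: int) -> str:
    """Map a 0-50 worksheet total to its due-diligence outcome band."""
    if not 0 <= total <= 50:
        raise ValueError("total must be between 0 and 50")
    if total >= 40:
        return "Pass"       # 40 to 50
    if total >= 30:
        return "Caution"    # 30 to 39
    if total >= 22:
        return "Watchlist"  # 22 to 29
    return "Fail"           # 0 to 21
```

For example, `outcome_band(34)` returns `"Caution"`, which is only the starting point before override rules and uncertainty handling are applied.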
Override Rules That Can Cap The Final Result
Some weaknesses are too important to let the raw score hide them. Apply these override rules before trusting the total.
If token supply information is opaque or incomplete, the project cannot receive a Pass until supply clarity improves.
If a major partnership claim is unverified, cap the partnerships and external credibility category at 2 until the claim is verified.
If token value capture is weak or poorly evidenced, cap the tokenomics and value-capture category at 2 until the token case improves.
If core evidence is materially contradictory, the overall outcome should not exceed Caution, depending on severity.
If the project carries significant central access risk, the overall outcome should not exceed Caution until the access risk is clarified.
If team or execution credibility fails, the overall outcome may be capped at Watchlist or Fail depending on severity.
Other caps may apply when a product is strong but the token link is weak, evidence is contradictory, or several moderate warnings cluster together. The override rules exist to stop the worksheet becoming false precision.
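One way to sketch an override cap, assuming bands are ordered Fail < Watchlist < Caution < Pass (an ordering the lesson implies but does not spell out as code):

```python
# Bands ordered from worst to best; list position gives a severity ordering.
BAND_ORDER = ["Fail", "Watchlist", "Caution", "Pass"]

def apply_band_caps(band: str, caps: list[str]) -> str:
    """Lower a scored band to the strictest applicable override cap."""
    for cap in caps:
        if BAND_ORDER.index(cap) < BAND_ORDER.index(band):
            band = cap
    return band
```

Under this sketch, a raw Pass with an unresolved central-access-risk cap of Caution comes out as Caution: the raw total never overrides a triggered cap.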
How To Record Uncertainty Before Reaching A Conclusion
Uncertainty handling is not optional. It is part of the method.
Before writing the final conclusion, record what is verified, what is partial, what is missing, what is contradictory, what is not applicable, what would change the score, what would cap the outcome, and what should be re-checked later.
This step forces the learner to show where the case is real and where it is provisional. Uncertainty should reduce confidence, not quietly inflate optimism. If an important category is mostly partial or contradictory, say so directly.
A Compact Worked Demonstration
Here is one compact realistic-composite example showing how the method works without turning this lesson into a full case-study lab.
Project: Northstar Compute.
Profile: Mid-stage infrastructure token with strong technical materials, believable developer activity, unresolved token value-capture questions, and one major partnership claim not yet verified.
| Category | Score And Flag | Research Note |
|---|---|---|
| Project Purpose And Problem | 4, Verified | Clear problem and coherent product logic. |
| Tokenomics And Value Capture | 3, Partial | The token has a plausible role, but the value-capture link is not convincing enough yet. |
| Partnerships And External Credibility | 3, Partial (capped at 2) | One major outside-support claim looks impressive, but the override rule caps this category at 2 until the claim is verified. |
| Adoption, Community, Development Activity | 4, Verified | Credible development continuity and healthy technical community signals. |
Strength: strong purpose clarity and credible development activity.
Concern: partnership credibility is overstated until verified.
Uncertainty: token value capture is still only partially evidenced.
Even with several strong features, the project should not be treated as a high-conviction Pass candidate yet because partnership verification is incomplete and token value capture remains only partially supported.
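The partnership cap in this demonstration can be sketched as data. The scores mirror the table above; the dictionary keys are illustrative shorthand, not TMU's schema.

```python
# Raw scores and flags from the Northstar Compute demonstration.
scores = {
    "purpose": (4, "Verified"),
    "tokenomics": (3, "Partial"),
    "partnerships": (3, "Partial"),
    "adoption": (4, "Verified"),
}

# Override rule: an unverified major partnership claim caps that
# category at 2 until the claim is verified.
effective = {
    name: (min(score, 2) if name == "partnerships" else score, flag)
    for name, (score, flag) in scores.items()
}
```

After the cap, the partnerships category contributes 2 rather than 3 to the worksheet total, which is exactly the mechanism that keeps the raw numbers from hiding the unverified claim.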
Common Due Diligence Mistakes To Avoid
The most common due-diligence mistakes are not about missing one detail. They are about breaking the method.
A strong story should not receive a strong score unless evidence supports it.
Price attention can distort judgement if the underlying evidence is weak.
A strong team, metric, or product claim cannot hide weak tokenomics, missing supply clarity, or poor external verification.
Missing evidence should reduce confidence. It should not be filled with optimism.
Some weaknesses cap the final result even when the raw total appears strong.
A good worksheet should make judgement more disciplined, not more excited.
Practical Crypto Due Diligence Checklist
Use this compact operational sequence each time you assess a project.
Score all 10 categories; do not skip weak areas because another area looks strong.
Score evidence quality, not the attractiveness of the story.
Use Verified, Partial, Missing, Contradictory, or Not Applicable.
Record what actually supports the case and what still weakens it.
Apply caps for supply opacity, weak token value capture, unverified partnerships, central access risk, team failure, or unresolved contradictions.
State what is verified, partial, missing, contradictory, what would change the score, and what must be re-checked later.
Use Pass, Caution, Watchlist, or Fail only after caps and uncertainty have been considered.
State the result in clear terms, including the main strength, main weakness, evidence quality, and what would change the view.
This is the practical worksheet flow. It should be repeatable across projects, not improvised from scratch each time.
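Under stated assumptions (illustrative names and data shapes, band thresholds taken from the outcome table), the repeatable flow can be sketched end to end:

```python
from dataclasses import dataclass

@dataclass
class CategoryResult:
    score: int   # 0-5, per the scoring scale
    flag: str    # Verified / Partial / Missing / Contradictory / Not Applicable

# Bands ordered worst to best, so index position gives severity.
BAND_ORDER = ["Fail", "Watchlist", "Caution", "Pass"]

def run_worksheet(results: dict[str, CategoryResult],
                  caps: list[str]) -> tuple[int, str]:
    """Total the category scores, look up the band, then apply band caps."""
    total = sum(r.score for r in results.values())
    if total >= 40:
        band = "Pass"
    elif total >= 30:
        band = "Caution"
    elif total >= 22:
        band = "Watchlist"
    else:
        band = "Fail"
    for cap in caps:
        if BAND_ORDER.index(cap) < BAND_ORDER.index(band):
            band = cap
    return total, band
```

For instance, ten categories scored 4 total 40, a raw Pass; a single Caution cap from an override rule still produces Caution, which is the discipline the worksheet exists to enforce.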
How This Prepares You For Case Study Practice
This lesson prepares the learner for Lesson 13 by turning all prior FA layers into one working judgement system.
The learner should now be able to score a project across all 10 categories, mark evidence quality properly, spot when the raw total is misleading, apply capping rules without emotional bias, separate score from confidence, and write a short due-diligence conclusion.
Lesson 13 will apply this system in case-style practice. The purpose of Lesson 12 is to lock the worksheet itself so the learner enters the lab with a clear method rather than loose impressions.