Key points
Lesson 12 is the capstone worksheet lesson of the Fundamental Analysis Hub.
It turns Lessons 1 to 11 into one structured due-diligence worksheet and scoring method.
The worksheet uses 10 fixed categories, each scored from 0 to 5.
Every score must be paired with an evidence flag, not just a number.
The final result uses outcome bands, but override rules can cap the outcome even when the raw score looks strong.
Uncertainty must be recorded explicitly before reaching a conclusion.
Quick Answer

A full crypto project due-diligence checklist is not just a list of questions. It is a structured judgement system. TMU’s full Fundamental Analysis method uses 10 scoring categories, a 0 to 5 scoring scale, evidence-quality flags, outcome bands, override rules, and uncertainty handling. The goal is not false precision. The goal is to make project judgement more disciplined, more comparable, and harder to distort with promotion, incomplete evidence, or one strong narrative layer.


What Is A Crypto Project Due Diligence Checklist?

A crypto project due-diligence checklist is a structured method for judging whether a project deserves serious attention as an investment candidate.

A good checklist does more than collect notes. It forces you to examine the major evidence layers, score them consistently, and state where the case is strong, weak, incomplete, or contradictory.

That matters because crypto research breaks down when it becomes impression-led. A project can look strong because the token is trending, because the founder sounds persuasive, or because one metrics dashboard looks attractive. None of that is enough on its own.

Clean rule: A full checklist exists to stop loose judgement. It turns scattered research into a controlled final view.

Why Full Due Diligence Matters In Crypto Fundamental Analysis

Full due diligence matters because crypto projects often combine real innovation, weak disclosure, early-stage uncertainty, token-market distortion, and uneven evidence quality all at once.

Common problems include incomplete documentation, immature products, noisy communities, token designs with hidden weaknesses, exaggerated partnership language, and adoption claims that may or may not be durable.

A loose research habit cannot handle that well. Full due diligence forces you to examine the project from several angles at once instead of letting one attractive feature dominate the whole conclusion.


How This Checklist Fits Into The FA Hub Course

This lesson sits after Lesson 11 and before Lesson 13 for a reason. Lessons 1 to 11 taught the major evidence layers that make up project-level Fundamental Analysis. Lesson 12 now turns those separate lessons into one capstone worksheet.

Lesson 1 gave the learner the first-pass beginner framework. Lessons 2 to 11 taught the main research domains. Lesson 12 turns those domains into one full scoring method. Lesson 13 then applies that method through case-study practice.

So this lesson does not repeat the beginner framework and it does not become a full case-study lab. Its role is methodological. It gives the learner the exact TMU worksheet for turning research into a final due-diligence judgement.


The TMU Crypto Due Diligence Worksheet

The TMU worksheet uses 10 fixed categories. Each category is scored from 0 to 5. Each category also receives an evidence flag. After that, the learner totals the score, checks the outcome band, applies override rules, and records uncertainty before writing the final conclusion.

TMU Due-Diligence Worksheet Categories
1. Project Purpose And Problem: whether the project solves a real problem clearly enough to justify serious analysis. (Lesson 1)
2. Market And Category Fit: whether the project makes sense inside a real market category. (Lessons 1 and 11)
3. Market Capitalisation And Valuation: whether valuation size fits maturity, quality, and evidence. (Lesson 2)
4. Tokenomics And Value Capture: whether token design is transparent, investable, and linked to value creation. (Lesson 3)
5. Documentation And Roadmap Quality: whether docs and roadmap materials improve judgement rather than obscure it. (Lesson 4)
6. Team And Execution Credibility: whether the people behind the project look capable of delivering. (Lesson 5)
7. Partnerships And External Credibility: whether outside support is real, relevant, and verified. (Lesson 6)
8. On-Chain Activity, Security, Usage Evidence: whether credible public evidence supports the project case. (Lesson 7)
9. Adoption, Community, Development Activity: whether the project looks alive, maintained, and capable of sustaining real user interest. (Lessons 8 and 10)
10. Regulation, Competition, Risk Profile: whether wider structural risks materially weaken the research case. (Lessons 9 and 11)

The worksheet is designed to reduce loose narrative judgement, create more consistent comparison across projects, and force visible treatment of weak evidence and unresolved uncertainty.
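One way to see the worksheet's shape is as a simple data record: ten fixed categories, each carrying a 0 to 5 score and one evidence flag. The sketch below is illustrative only; the `CategoryScore` type and its field names are assumptions for demonstration, not part of the TMU material, while the category and flag names come from this lesson.

```python
from dataclasses import dataclass

# The five evidence flags defined later in this lesson.
EVIDENCE_FLAGS = {"Verified", "Partial", "Missing", "Contradictory", "Not Applicable"}

# The 10 fixed worksheet categories listed above.
CATEGORIES = [
    "Project Purpose And Problem",
    "Market And Category Fit",
    "Market Capitalisation And Valuation",
    "Tokenomics And Value Capture",
    "Documentation And Roadmap Quality",
    "Team And Execution Credibility",
    "Partnerships And External Credibility",
    "On-Chain Activity, Security, Usage Evidence",
    "Adoption, Community, Development Activity",
    "Regulation, Competition, Risk Profile",
]

@dataclass
class CategoryScore:
    """One scored worksheet category: a 0-5 score plus one evidence flag."""
    category: str
    score: int     # 0 to 5
    flag: str      # one of EVIDENCE_FLAGS
    note: str = "" # short research note

    def __post_init__(self):
        # A score without a valid category or evidence flag is not a worksheet entry.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not 0 <= self.score <= 5:
            raise ValueError("score must be 0 to 5")
        if self.flag not in EVIDENCE_FLAGS:
            raise ValueError(f"unknown evidence flag: {self.flag}")
```

The point of the validation is the lesson's own rule: a number without an evidence flag, or outside the 0 to 5 scale, is not a usable worksheet entry.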


The 10 Core Scoring Categories

Each category asks a different question. Together, they force the learner to look at the project from several angles instead of hiding behind one strong story.

Project purpose and problem asks whether the project is solving a real problem clearly enough to justify serious analysis.

Market and category fit asks whether the project makes sense inside a real market category and has believable market context.

Market capitalisation and valuation asks whether valuation size makes sense relative to maturity, quality, and evidence.

Tokenomics and value capture asks whether the token design is transparent, investable, and linked to value creation.

Documentation and roadmap quality asks whether whitepapers, docs, and roadmap materials improve judgement rather than obscure it.

Team and execution credibility asks whether the people behind the project look capable of delivering.

Partnerships, investors, and external credibility asks whether outside support is real, relevant, and verified.

On-chain activity, security, and usage evidence asks whether there are credible signs of real usage, technical life, and security awareness.

Adoption, community, and development activity asks whether the project looks alive, maintained, and capable of sustaining real user interest.

Regulation, competition, and risk profile asks whether wider structural risks materially weaken the research case.


How To Score Each Category From 0 To 5

Use this scale exactly. Do not score the story. Score the evidence supporting the story.

0 To 5 Scoring Scale
0: No usable evidence, severe weakness, or evidence that undermines the category. Treat as a hard weakness unless later evidence changes.
1: Very weak evidence, major gaps, or serious unresolved concerns. Treat with strong caution.
2: Weak or partial evidence with important uncertainty. Not enough for confidence.
3: Adequate evidence, but with clear limitations or unresolved questions. Usable but not strong.
4: Strong evidence with normal uncertainty. A positive category signal.
5: Very strong, well-supported evidence with clear relevance and limited unresolved concern. A rare score; use carefully.
Scoring discipline: If 5 becomes common, the worksheet is being used too generously.

How To Mark Evidence Quality

Every category score must carry one evidence flag. A number without an evidence flag is too easy to overtrust.

Evidence Quality Flags
Verified: the score is supported by clear and credible evidence. Trust it more than the same score with weaker evidence.
Partial: some evidence exists, but it is limited in scope, depth, or reliability. Reduce confidence if the missing detail could change the verdict.
Missing: the evidence needed for a confident score is not really there. Do not reward the category for absence of proof.
Contradictory: the evidence conflicts in a material way. Reduce trust until the conflict is resolved.
Not Applicable: the category is not meaningfully relevant in the normal way. Use sparingly and explain clearly.
Important distinction: A 4 with Partial evidence should be trusted less than a 3 with Verified evidence if the missing information could materially change the verdict.

Outcome Bands: Pass, Caution, Watchlist, Or Fail

After scoring all 10 categories, total the result out of 50. The outcome bands give the learner a disciplined starting point before override rules and uncertainty handling are applied.

Outcome Bands
40 to 50: Pass. The project survives full due diligence well enough to be treated as fundamentally credible, subject to override rules and uncertainty handling.
30 to 39: Caution. The project has enough quality to stay alive as a serious candidate, but unresolved issues still matter.
22 to 29: Watchlist. The project is not strong enough yet, but may deserve monitoring if evidence improves.
0 to 21: Fail. Too many categories are weak, too much evidence is missing, or the risk profile is too compromised.

These bands are not buy or sell signals. They are due-diligence outcome labels that help discipline the research process.
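The band thresholds translate directly into a lookup. A minimal sketch, assuming the total is an integer out of 50 (the function name is an illustrative assumption):

```python
def outcome_band(total: int) -> str:
    """Map a 0-50 worksheet total to its outcome band (before override caps)."""
    if not 0 <= total <= 50:
        raise ValueError("total must be between 0 and 50")
    if total >= 40:       # 40 to 50
        return "Pass"
    if total >= 30:       # 30 to 39
        return "Caution"
    if total >= 22:       # 22 to 29
        return "Watchlist"
    return "Fail"         # 0 to 21
```

Note that this is only the starting point: the lesson applies override rules and uncertainty handling on top of the raw band.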


Override Rules That Can Cap The Final Result

Some weaknesses are too important to let the raw score hide them. Apply these override rules before trusting the total.

1. Hidden Or Unclear Token Supply: the project cannot receive a Pass until supply clarity improves.

2. Major Unverified Partnership Claims: cap the partnerships and external credibility category at 2 until the claim is verified.

3. No Clear Token Value Capture: cap the tokenomics and value-capture category at 2 until the token case improves.

4. Security Exploit With Weak Response: the overall outcome should not exceed Caution, depending on severity.

5. Central Regulatory Or Market-Access Risk: the overall outcome should not exceed Caution until the access risk is clarified.

6. Team Credibility Failure: the overall outcome may be capped at Watchlist or Fail depending on severity.

Other caps may apply when a product is strong but the token link is weak, evidence is contradictory, or several moderate warnings cluster together. The override rules exist to stop the worksheet becoming false precision.
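The outcome-level overrides amount to taking the worse of the raw band and the cap. The sketch below shows that logic for the rules that cap the overall outcome; the issue-flag names are illustrative assumptions, and the category-level caps (rules that limit a single category's score at 2) act on individual scores before totalling, so they are not shown here.

```python
# Bands ordered from worst to best, so capping means never exceeding the limit.
BAND_ORDER = ["Fail", "Watchlist", "Caution", "Pass"]

def apply_overrides(raw_band: str, issues: set) -> str:
    """Cap the raw outcome band according to the outcome-level override rules.

    `issues` holds illustrative flag names for triggered override rules.
    """
    band = raw_band

    def cap(limit: str) -> None:
        nonlocal band
        if BAND_ORDER.index(band) > BAND_ORDER.index(limit):
            band = limit

    if "hidden_token_supply" in issues:
        cap("Caution")        # cannot receive a Pass until supply clarity improves
    if "exploit_weak_response" in issues:
        cap("Caution")        # overall outcome should not exceed Caution
    if "central_access_risk" in issues:
        cap("Caution")        # overall outcome should not exceed Caution
    if "team_credibility_failure" in issues:
        cap("Watchlist")      # may be capped at Watchlist or Fail depending on severity
    return band
```

Severity judgement still belongs to the researcher; the code only enforces that a triggered cap is never quietly exceeded by a strong raw total.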


How To Record Uncertainty Before Reaching A Conclusion

Uncertainty handling is not optional. It is part of the method.

Before writing the final conclusion, record what is verified, what is partial, what is missing, what is contradictory, what is not applicable, what would change the score, what would cap the outcome, and what should be re-checked later.

This step forces the learner to show where the case is real and where it is provisional. Uncertainty should reduce confidence, not quietly inflate optimism. If an important category is mostly partial or contradictory, say so directly.


A Compact Worked Demonstration

Here is one compact realistic-composite example showing how the method works without turning this lesson into a full case-study lab.

Project: Northstar Compute.

Profile: a mid-stage infrastructure token with strong technical materials, believable developer activity, unresolved token value-capture questions, and one major partnership claim not yet verified.

Northstar Compute Sample Scoring
Project Purpose And Problem: 4, Verified. Clear problem and coherent product logic.
Tokenomics And Value Capture: 3, Partial. The token has a plausible role, but the value-capture link is not convincing enough yet.
Partnerships And External Credibility: 2 (capped), Partial. One major outside-support claim looks impressive, but the override rule caps the category at 2 until the claim is verified.
Adoption, Community, Development Activity: 4, Verified. Credible development continuity and healthy technical community signals.

Strength: strong purpose clarity and credible development activity.

Concern: partnership credibility is overstated until verified.

Uncertainty: token value capture is still only partially evidenced.

Even with several strong features, the project should not be treated as a high-conviction Pass candidate yet because partnership verification is incomplete and token value capture remains only partially supported.


Common Due Diligence Mistakes To Avoid

The most common due-diligence mistakes are not about missing one detail. They are about breaking the method.

1. Scoring The Narrative Instead Of The Evidence: a strong story should not receive a strong score unless evidence supports it.

2. Treating A Trending Token As Proof Of Quality: price attention can distort judgement if the underlying evidence is weak.

3. Letting One Strong Category Dominate The Worksheet: a strong team, metric, or product claim cannot hide weak tokenomics, missing supply clarity, or poor external verification.

4. Giving High Scores To Missing Information: missing evidence should reduce confidence. It should not be filled with optimism.

5. Ignoring Override Rules: some weaknesses cap the final result even when the raw total appears strong.

A good worksheet should make judgement more disciplined, not more excited.


Practical Crypto Due Diligence Checklist

Use this compact operational sequence each time you assess a project.

1. Fill In All 10 Categories: do not skip weak areas because another area looks strong.

2. Score Each Category From 0 To 5: score evidence quality, not the attractiveness of the story.

3. Assign One Evidence Flag To Each Category: use Verified, Partial, Missing, Contradictory, or Not Applicable.

4. Identify The Strongest And Weakest Categories: record what actually supports the case and what still weakens it.

5. Check Override Rules: apply caps for supply opacity, weak token value capture, unverified partnerships, central access risk, team failure, or unresolved contradictions.

6. Record Uncertainty: state what is verified, partial, missing, contradictory, what would change the score, and what must be re-checked later.

7. Total The Score And Apply The Outcome Band: use Pass, Caution, Watchlist, or Fail only after caps and uncertainty have been considered.

8. Write A Short Final Verdict: state the result in clear terms, including the main strength, main weakness, evidence quality, and what would change the view.

This is the practical worksheet flow. It should be repeatable across projects, not improvised from scratch each time.
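The arithmetic core of that sequence, totalling the ten scores, mapping the total to a band, and honouring any outcome caps, can be combined into one small pass. Everything below is an illustrative sketch under assumed names (`worksheet_verdict`, the `caps` parameter), not official TMU tooling, and it deliberately leaves evidence flags and uncertainty notes to the researcher:

```python
def worksheet_verdict(scores, caps=()):
    """Total ten 0-5 category scores, map to a band, then apply any band caps.

    scores: dict of category name -> 0-5 score (all 10 categories required).
    caps:   outcome-band ceilings triggered by override rules, e.g. ["Caution"].
    Returns (total, band).
    """
    if len(scores) != 10:
        raise ValueError("all 10 categories must be scored")
    if any(not 0 <= s <= 5 for s in scores.values()):
        raise ValueError("each score must be 0 to 5")

    total = sum(scores.values())  # out of 50

    # Outcome bands from this lesson: 40-50 Pass, 30-39 Caution,
    # 22-29 Watchlist, 0-21 Fail.
    if total >= 40:
        band = "Pass"
    elif total >= 30:
        band = "Caution"
    elif total >= 22:
        band = "Watchlist"
    else:
        band = "Fail"

    # Override caps: the final band never exceeds any triggered ceiling.
    order = ["Fail", "Watchlist", "Caution", "Pass"]
    for cap in caps:
        if order.index(band) > order.index(cap):
            band = cap
    return total, band
```

For example, ten categories scored 4 give a raw Pass at 40 out of 50, but a single triggered Caution cap pulls the final outcome down regardless of the total, which is exactly the discipline the override rules are meant to enforce.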


How This Prepares You For Case Study Practice

This lesson prepares the learner for Lesson 13 by turning all prior FA layers into one working judgement system.

The learner should now be able to score a project across all 10 categories, mark evidence quality properly, spot when the raw total is misleading, apply capping rules without emotional bias, separate score from confidence, and write a short due-diligence conclusion.

Lesson 13 will apply this system in case-style practice. The purpose of Lesson 12 is to lock the worksheet itself so the learner enters the lab with a clear method rather than loose impressions.


Mini FAQs

Why are the 10 categories fixed rather than adjusted for each project?
Because the capstone lesson is meant to create a repeatable method. If the categories change every time, project comparison becomes too loose and too narrative-driven.

Can a project that scores weakly today become worth revisiting later?
Yes. That is what the Watchlist band is for. A weak current case can still become more interesting later if evidence improves.

Can a project at any market-cap size still score well on valuation?
Yes. Market-cap size on its own is not the point. The question is whether valuation size makes sense relative to evidence, stage, and quality.

Should missing evidence be scored optimistically?
No. Missing or contradictory evidence should reduce confidence and, where relevant, lower the category score.

Why can override rules cap a result even when the total looks strong?
Because some risks matter more than the arithmetic. Override rules stop the worksheet from rewarding projects that look numerically acceptable but are structurally weak.

Is the worksheet a buy or sell signal?
No. It is a due-diligence discipline tool. It helps you judge project quality more clearly, not replace portfolio construction or risk management.

If this changed how you score projects, evidence quality, and final FA verdicts, the weekly member update helps turn that discipline into a repeatable market process. Alpha Insider members get the real-time framework behind market quality, rotation, and signal trust every week across KAIROS timing, on-chain data, and macro signals. Explore membership here.