So you think you know AI?
To wrap your head around the null hypothesis, you have to think in a kind of back-a** way: almost like deciding what is correct, and then working backwards from what you think is correct.
Normally you create a hypothesis and try to prove it right or wrong. Here we create a hypothesis which, under ideal circumstances, should always be right, and then we try to prove it wrong.
Understanding the Null Hypothesis: The Fair Coin Example
The null hypothesis is a fundamental concept in statistics. In simple terms, it’s a statement that there’s no effect or no difference—that what we observe is due to pure chance, not some underlying bias or special cause.
Let’s take a classic example: flipping a fair coin.
If a coin is truly fair, we expect that over many flips, heads and tails will come up with equal probability—50% each. The null hypothesis here would be:
"The coin is fair, so the probability of heads equals the probability of tails."
Now, under ideal circumstances, if you flip the coin twice, you might imagine getting exactly one head and one tail. But in real life, that doesn’t always happen. In fact, if you flip a coin many times, you’ll notice that the numbers of heads and tails will often be unequal.
Why? Because randomness is messy. Even with a fair coin, streaks and imbalances happen naturally. Getting 60 heads and 40 tails in 100 flips doesn’t prove the coin is unfair—it could just be chance. That’s exactly why the null hypothesis exists: it’s our default position, and we only reject it when the evidence against it is strong enough.
In practice, statistical tests help us decide whether the differences we see—like that 60–40 split—are likely to be random noise, or so unlikely under the null hypothesis that we start suspecting something else is going on.
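To see how messy randomness really is, here is a minimal simulation sketch (the function name `flip_coin` and the seed are my own choices, not from the original): even a perfectly fair coin rarely lands on an exact 50–50 split.

```python
import random

def flip_coin(n_flips, seed=None):
    """Simulate n_flips of a fair coin; return the number of heads."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips))

# A fair coin, yet the split is almost never exactly 50-50.
heads = flip_coin(100, seed=42)
print(f"{heads} heads, {100 - heads} tails out of 100 flips")
```

Run it with different seeds and you will see the imbalance drift around 50 — that drift is exactly the "random noise" the null hypothesis accounts for.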
But this isn’t just about coins.
If I were running an experiment for something much more serious—say, testing a new medicine—I'd want to be very careful. I wouldn't want to make claims I can't back up, because in reality the effectiveness of a medicine can depend on many external factors which vary from person to person. The placebo effect is real.
Another way to think about it is the analogy of a trial. You will again see how this is a back-a** way of thinking. Rejecting H₀ is like convicting in a trial—you don't prove the person did it; you just decide the evidence makes the "not guilty" assumption unreasonable. Double negatives suck.
This is why we set up our null hypothesis and then look for enough evidence to reject it. We don’t accept it—we just keep it until something stronger comes along to topple it.
But here’s the big question: when do we decide the evidence is strong enough to reject the null hypothesis?
That's where the p-value enters the scene. But first, we need to understand the term test statistic.
A Test Statistic is a single number calculated from sample data that is used to decide whether to reject the null hypothesis in a statistical test.
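For the coin example, one common test statistic is a z-score: how many standard deviations the observed head count is from what a fair coin would predict. This is a sketch, not from the original post; the function name is mine, and it assumes the usual binomial mean $np$ and standard deviation $\sqrt{np(1-p)}$.

```python
import math

def coin_z_statistic(heads, n_flips, p=0.5):
    """How many standard deviations the observed head count is
    from what H0 (a fair coin, p=0.5) expects."""
    expected = n_flips * p                       # 50 heads for 100 flips
    std_dev = math.sqrt(n_flips * p * (1 - p))   # 5 for 100 flips
    return (heads - expected) / std_dev

print(coin_z_statistic(60, 100))  # → 2.0
```

That 60–40 split from earlier boils down to the single number 2.0: the observation sits two standard deviations away from the fair-coin expectation.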
P-value: for the purposes of AI, all we need to know at this stage is that:
The p-value is a number that tells us how likely our results are if the null hypothesis is true.
It answers this question:
"If the null hypothesis were correct, what is the chance of getting results like mine (or more extreme) just by random chance?"
A small p-value means the results are unlikely under the null hypothesis.
A large p-value means the results are fairly likely under the null hypothesis.
Before starting, we pick a cut-off value called the significance level (often 0.05).
If p-value ≤ 0.05 → we reject the null hypothesis. This means the observed results would be very unlikely to have happened by chance if the null hypothesis were true. (Note: this is strong evidence against the null hypothesis, not proof.)
If p-value > 0.05 → we do not reject the null hypothesis. This means we have not found strong enough evidence to say the results are unlikely under the null hypothesis.
The p-value does not tell us the probability that the null hypothesis is true. It only tells us how well the results match what we’d expect if the null hypothesis were true.
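Putting it all together, here is a sketch of an exact two-sided p-value for the coin example, using only the standard library. The function name and structure are my own; the calculation itself is the standard binomial probability of a result at least as extreme as the one observed.

```python
from math import comb

def binomial_p_value(heads, n_flips, p=0.5):
    """Two-sided exact p-value: the probability, under H0 (a fair coin),
    of a head count at least as far from the expected count as observed."""
    expected = n_flips * p
    deviation = abs(heads - expected)
    return sum(
        comb(n_flips, k) * p**k * (1 - p)**(n_flips - k)
        for k in range(n_flips + 1)
        if abs(k - expected) >= deviation
    )

p_val = binomial_p_value(60, 100)
print(f"p-value for 60 heads in 100 flips: {p_val:.4f}")
```

Interestingly, for 60 heads in 100 flips this comes out just above 0.05 (around 0.057), so with the usual cut-off we would not reject the null hypothesis: a 60–40 split alone is not quite strong enough evidence that the coin is unfair.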