Please help me get started

Ghost3k

New member
Joined
Nov 4, 2011
Messages
42
Hi, I was hoping someone could help me get started on this problem.



Show that the random variable

. . . . .\(\displaystyle X\, =\, \begin{cases}a&\mbox{with probability }\, p\\b&\mbox{with probability }\, 1\, -\, p \end{cases}\)

where \(\displaystyle \, a,\, b\, \in\, \mathbb{R}\, \) and \(\displaystyle \, p\, \in\, (0,\, 1),\, \) has the properties

. . . . .\(\displaystyle \mbox{E}X\,=\, b\, +\, (a\, -\, b)p\, \). . .and . . .\(\displaystyle \mbox{Var }\, X\, =\, (a\, -\, b)^2\, p\, (1\, -\, p)\)

Hint: Write X as a linear transformation of a Bernoulli random variable.




The hint says to write X as a linear transformation of a Bernoulli random variable. I'm not really sure how that would look. I know that a linear transformation is when you multiply and/or add a constant to a random variable. So would that be like Y = mX + b? I know that the expected value of a Bernoulli random variable is p and the variance is p(1-p). I believe I have the right idea, I just don't know how to apply it. Any help is appreciated!

Edit: I also know that the expected value of the new random variable would be E[Y] = m*E[X] + b and the variance would be Var[Y] = m^2*Var(X)

Edit2: So to prove it, would I just have to set m = a-b? Because if I do, and substitute the values into the equations I have above, I get the same answers as in the problem. I'm not really sure why one would do that, though (replacing m with a-b).
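As a quick numerical sanity check of those two transformation rules (a Python sketch; the specific m, c, p values are arbitrary, and I write the shift as c to avoid clashing with the b in the problem), take X ~ Bernoulli(p) and Z = m*X + c:

```python
# X ~ Bernoulli(p): X = 1 with probability p, X = 0 with probability 1-p.
# Let Z = m*X + c and compute E[Z], Var(Z) directly from Z's two values.
m, c, p = 3.0, 2.0, 0.3

ez   = p * (m * 1 + c) + (1 - p) * (m * 0 + c)
varz = p * ((m + c) - ez) ** 2 + (1 - p) * (c - ez) ** 2

assert abs(ez   - (m * p + c)) < 1e-12           # E[Z] = m*E[X] + c
assert abs(varz - m ** 2 * p * (1 - p)) < 1e-12  # Var(Z) = m^2 * Var(X)
print(ez, varz)
```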
Just a forewarning: I'm not the greatest at statistics and I don't 100% grasp what, exactly, a Bernoulli random variable is. With that said, I think I have a solution to the problem. You're asked to prove two separate parts: That the expected value of X is the expression they give and that the variance of X is the expression they give. I'll start with the expected value part.

Let's think about what expected value means. It's the weighted average value of X. To figure out how to find the expected value, try an easier problem. Suppose that 40% of the time, the value of Y is 0, and the other 60% of the time, the value is 1. So how would you calculate the expected value of Y? Now can you use that same principle to solve the actual problem? Once you do, it's just a matter of monkeying around with some algebra to get it to look like the form they give.
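The weighted-average idea for the simpler example can be sketched in a couple of lines of Python (the 0.4/0.6 split is just the one from the example above):

```python
# Y = 0 with probability 0.4, Y = 1 with probability 0.6.
values = [0, 1]
probs = [0.4, 0.6]

# Expected value: weight each value by its probability and sum.
ey = sum(v * p for v, p in zip(values, probs))
print(ey)  # 0.6
```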

Okay, and what is the variance? Well, it's the probability-weighted average of the squared differences from the expected value. And you just calculated the expected value, so that's handy. Returning to our simpler example, can you calculate the variance of Y? Remember that variance takes the form \(\displaystyle \sigma ^2=\left(some\:expression\right)\); taking the square root of it would give you the standard deviation \(\sigma\), not the variance, so here you want the squared quantity itself.
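Continuing the same sketch, the variance of the simpler example is the probability-weighted average of squared deviations from E[Y]:

```python
# Y = 0 with probability 0.4, Y = 1 with probability 0.6.
values = [0, 1]
probs = [0.4, 0.6]
ey = sum(v * p for v, p in zip(values, probs))  # 0.6, as before

# Variance: probability-weighted average of squared deviations from E[Y].
var_y = sum(p * (v - ey) ** 2 for v, p in zip(values, probs))
print(var_y)  # 0.24, which matches p*(1-p) = 0.6*0.4
```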

Hopefully that's enough of a hint to get you started on your way. If not, then when you reply back, please include all of your steps and work (even if you know it's wrong). Good luck.
 

You have (almost) started to do the proper thing. That is, write X as a linear transformation to get

X = m Y + c

[b is already used in the problem] where Y is a Bernoulli random variable with mean p. Now you have to choose m and c so that they satisfy the conditions

(1) Y=1 (probability p) implies X = a and Y=0 (probability 1-p) implies X = b

Note that this is not the only way we could set this up. We could also choose Y to be a Bernoulli random variable with mean q = 1-p and find m and c so that

(2) Y=0 (probability p=1-q) implies X = a and Y=1 (probability q =1-p) implies X = b

Hm okay, that makes sense about the Y and c and the possibility of looking at it in two ways. But I'm not sure what you mean by choose m and c to meet the conditions. Assuming I use the first way, do I not just need to take the expected value of X, which would be E[X] = m*E[Y] + c, and the variance Var(X) = m^2*Var[Y]? Knowing this, and that the mean of Y is p, would E[X] = m*p + c and likewise the variance Var(X) = m^2 * p(1-p)?

Edit: Or do I have to do it as follows. To keep the conditions that Y = 1 with prob p means X = a and Y = 0 with prob 1-p means X = b, I make m = a-b and c = b? Because if I do that X = (a-b)(1) + b = a and X = (a-b)(0) + b = b

X = m Y + c
Y=1 (probability p) implies X = a
a = m * 1 + c
and Y=0 (probability 1-p) implies X = b
b = m * 0 + c
So c = b and m = a - c = a - b, giving
X = (a-b) Y + b.

But yes, you can do it the other way as well, but then you need to check that X takes the proper values with the proper probabilities.
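For what it's worth, here is a rough Python sketch (the particular a, b, p values are just samples) that checks the two closed forms exactly, and then sanity-checks the "other way" by simulation:

```python
import random

# Sample numbers; any real a, b and p in (0, 1) would do.
a, b, p = 5.0, 2.0, 0.3

# Exact check of the two target identities, straight from the definition of X:
ex  = p * a + (1 - p) * b                          # weighted average of values
var = p * (a - ex) ** 2 + (1 - p) * (b - ex) ** 2  # weighted squared deviations
assert abs(ex  - (b + (a - b) * p)) < 1e-12            # E X = b + (a-b)p
assert abs(var - (a - b) ** 2 * p * (1 - p)) < 1e-12   # Var X = (a-b)^2 p(1-p)

# The other parameterization: Y ~ Bernoulli(q) with q = 1-p, X = (b-a)*Y + a.
# Then Y = 0 gives X = a, which happens with probability 1 - q = p.
random.seed(0)
n = 100_000
hits_a = sum(((b - a) * (1 if random.random() < 1 - p else 0) + a) == a
             for _ in range(n))
print(hits_a / n)  # should be close to p = 0.3
```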
Thank you. You were incredibly helpful.
 