Question about probability and statistics within a simulated card pack opening scenario.

Sleepy5

Hello,

I hope someone can help me with the questions I have, as I have been unable to find any way of answering them adequately. Please keep in mind that my knowledge and understanding of pretty much every field of mathematics is pretty basic or non-existent...

Here's the deal: I wanted to understand how probabilities work when opening card packs. For this purpose I'm imagining a situation in which the card packs have well-defined and known odds of containing rare cards, and in which the supply of packs is infinite, limited only by the resources it costs you to buy them.

Let's start with packs that have a 10% chance of containing a rare card, and let's say each pack contains 1 card.

You open 10 packs and find only common cards. Well, better luck next time, you figure; the odds favor you. You open 10 more, still no rare. Damn, well then for sure the next few will have one. You open 10 more still, and once more, unlucky, no rare card. What are the chances! And now you're out of currency, which is too bad, because you figure your luck would probably have turned very soon.

Is that or is that not a myth? For sure, if you were to keep opening card packs, you would eventually get rares, and eventually your ratio of rare to common cards obtained would end up reflecting the probability of obtaining a rare card that the packs were designed with. After thousands or tens of thousands of tries, that ratio would even stabilize and stay within a short and well-defined range... So how is that possible if each and every pack you open is fully distinct from every other? If opening one pack doesn't affect what you get in the next, how can such a thing be? And if they truly are distinct from each other, then whether you are opening one pack or another shouldn't matter.

What if the rares in one pack are worth more than in the other, the probability of rares in each is still the same, but the resources they cost to open are different? One costs gold pieces, the other silver pieces, and you are limited in the quantity you have of each. Say they cost 1 gold and 1 silver respectively, and you have 1000 silver but only 100 gold. Could you not open packs until the ratio of rares to commons obtained stabilizes, and then calculate the optimal moment to switch to gold packs, depending on what your current ratio is, in order to maximize the number of times rares appear in gold packs? And if not, why not, exactly, if each card pack is already independent of the others, whether it is gold or silver?

I know you can't use probabilities to predict when a chance-based outcome will happen, but can you not predict the probability of each outcome as part of a total outcome and rely on that to make decisions?

If so, how do you manage that with packs that have differing probabilities? Say we add bronze packs to the mix that cost 1 bronze to open, of which you have 100, but that have a 30% chance of containing a rare, while your goal remains unchanged: to obtain rares from the gold packs. Is there a way to put all this in equations? And how should you keep track of it all effectively?

A quick rundown of the final scenario as a whole with some changes and additions:

5 types of packs, quantity of each infinite:
-Platinum packs: cost 1 platinum, 5% chance of rare --> very high value rares.
-Gold packs: cost 1 gold, 10% chance of rare --> high value rares.
-Silver packs: cost 1 silver, 10% chance of rare --> not desired.
-Bronze packs: cost 1 bronze, 30% chance of rare --> not desired.
-Iron packs: cost 1 iron, 7% chance of rare --> not desired.

Resources available:

5 Platinum, 25 Gold, 100 Silver, 20 Bronze, 150 Iron.

Goal: use up all resources while maximizing the number of rares obtained from Gold and Platinum packs.
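
To give an idea of the kind of equation I'm hoping exists (and please correct me if this is the wrong way to think about it), I gather that just opening every pack would give, on average, something like:

[MATH]5 \times 0.05 = 0.25 \text{ Platinum rares}[/MATH]
[MATH]25 \times 0.10 = 2.5 \text{ Gold rares}[/MATH]

and similarly for the other pack types.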


Thanks for reading through this; I hope you can help provide me with some answers to this quandary. Enjoy the weekend.
 
I understand what you want to say, but I think that the answer to this is pretty simple. If we denote by p(n) the number of times we got a rare card (the number of times the event we're interested in happened), and by n the number of times we opened a card pack (the number of times we conducted the experiment), then we define the probability P as [MATH]\lim_{n \to \infty} \frac{p(n)}{n}[/MATH]. That basically means that you can only determine the probability of an event exactly by conducting the experiment an infinite number of times, which is practically impossible.
But as you say, if you conduct the same experiment 100000 times and compute the fraction p(100000)/100000, you will probably get something that is close to the probability. But that doesn't mean that the probabilities will "stabilise" as you say; it is still possible to draw 100000 common cards right after that, because the events we're talking about are independent. That's why your model wouldn't work. We can use the information you gave us to try and optimise the result, but there is no "optimal moment" to change the betting strategy, because the events in place are independent. I mean, think about it: if that were true, then people would be doing this kind of thing in casinos all the time! I hope I understood you and gave a satisfactory answer.
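
If you want to see what this looks like in numbers, here is a rough simulation sketch in Python (using the 10% pack from your example; the variable names and the 20-pack "dry streak" cutoff are just illustrative choices, not anything official). It estimates both the overall rare frequency and the rare frequency on the very next pack after a long streak of commons; both come out close to 0.10.

[CODE]
import random

P_RARE = 0.10       # chance of a rare in each pack (your 10% example)
N_PACKS = 100_000   # number of packs to open in the simulation

random.seed(1)      # fixed seed so the run is repeatable

rares = 0
dry_streak = 0                  # packs opened since the last rare
rares_after_long_streak = 0     # rares found right after 20+ packs with none
tries_after_long_streak = 0

for _ in range(N_PACKS):
    if dry_streak >= 20:
        tries_after_long_streak += 1
    got_rare = random.random() < P_RARE   # each pack is an independent 10% draw
    if got_rare:
        if dry_streak >= 20:
            rares_after_long_streak += 1
        rares += 1
        dry_streak = 0
    else:
        dry_streak += 1

print("overall rare frequency:", rares / N_PACKS)                 # close to 0.10
print("rare frequency right after a 20-pack dry streak:",
      rares_after_long_streak / tries_after_long_streak)          # also close to 0.10
[/CODE]

The second number is the point: the packs have no memory, so a long dry streak doesn't make the next pack any more likely to contain a rare.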
 
Suppose you tossed a fair coin 100 times and it came up heads each time. Remember that the coin is fair.
What do you think the probability of the next toss being a head would be? How about a tail?

Just because you lose (or win) many times in a row does not influence the outcome of the next game (in a game where you cannot control the outcome).
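
In symbols, because the tosses are independent:

[MATH]P(\text{heads on toss } 101 \mid \text{100 heads so far}) = P(\text{heads}) = \frac 1 2[/MATH]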
 
Of course you are correct for a coin that is known to be fair. But you could view this problem another way. If I flipped a coin 100 times and got all Heads, my immediate thought would be that the coin is not fair and that it is highly likely the next toss will be Heads, even given that the tosses are independent. In fact, you could consider that as a hypothesis test where the null hypothesis is [MATH]p(H) = \frac 1 2[/MATH] (that the coin is fair). Such an outcome would reject the null hypothesis with a high degree of confidence.
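
To put a number on it: under the null hypothesis of a fair coin, the probability of seeing 100 heads in a row is

[MATH]\left(\tfrac 1 2\right)^{100} \approx 7.9 \times 10^{-31},[/MATH]

so that outcome would let you reject the null hypothesis at essentially any significance level.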
 