Wikipedia:Reference desk/Mathematics
Welcome to the mathematics reference desk.

November 11
Inverse Functions
Let f be a function from R to R defined by f(x) = x^2. Find f^–1({x | 0 < x < 1}).
So I found the inverse of f(x): f^–1(x) = √x
It seems that the solution I found ( f^–1(x) = √x ) is valid for x = 0 and all positive real numbers (x ≥ 0) because the domain and codomain are the real numbers. But not all positive real numbers are greater than 0 and less than 1. So what exactly does "find f^–1({x | 0 < x < 1})" mean?
ThunderBuggy (talk) 17:28, 11 November 2018 (UTC)
 {x | 0 < x < 1} is just a roundabout way of using set-builder notation to denote the open interval (0, 1). In this case, f doesn't have an inverse because it's not injective. But even if it did, the notation used is most likely asking for the preimage of the given interval under f. –Deacon Vorbis (carbon • videos) 17:38, 11 November 2018 (UTC)
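 For concreteness, the preimage can be spelled out in one line (a worked step beyond what the thread shows): since f(x) = x^2, a point x lands in (0, 1) exactly when 0 < x^2 < 1.

```latex
f^{-1}\big((0,1)\big) = \{x \in \mathbb{R} : 0 < x^2 < 1\} = (-1, 0) \cup (0, 1)
```

 The condition x^2 > 0 excludes x = 0, and x^2 < 1 forces −1 < x < 1, giving the two open intervals.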
Who needs big data?
When you already have decent sampling for your inferences, why would more data alter the conclusion significantly? Doroletho (talk) 20:12, 11 November 2018 (UTC)
 If you haven’t already seen it, the article Sample size determination may be helpful. A larger sample size generally gives a narrower confidence interval, although beyond a certain point this effect gets small. A larger sample size allows the attainment of higher statistical power to reject the null hypothesis in favor of a specific alternative hypothesis. If obtaining a larger sample size can be done costlessly, a larger sample size is always better. But with costs of data collection, beyond some sample size the gains will no longer exceed the costs on the margin. Loraof (talk) 20:55, 11 November 2018 (UTC)
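 As a rough numerical sketch of Loraof's point (the function and the choice of a 50% proportion are illustrative, not from the thread): the half-width of a 95% confidence interval for a sample proportion shrinks like 1/√n, so going from a thousand samples to a million buys surprisingly little extra precision.

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a sample
    proportion p estimated from n independent samples (normal approx.)."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns: each 10x increase in n only shrinks the
# interval by a factor of sqrt(10) ≈ 3.16.
for n in (100, 1_000, 10_000, 1_000_000):
    print(n, round(ci_half_width(0.5, n), 4))
```

 At n = 100 the half-width is already 0.098 (about ±10 points); a million samples narrow it to roughly ±0.1 points, which rarely changes a conclusion.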
 That's exactly the point: why analyze 1,000,000 samples when the additional value after, say, 1,000 random samples would be minimal and falling? Someone analyzing megabytes or gigabytes might obtain a result equivalent to someone analyzing terabytes or even petabytes. I'm not just asking why more data makes sense, but why analyzing a gigantic amount of data makes sense. Doroletho (talk) 22:26, 11 November 2018 (UTC)
 I don't know whether you have a specific example in mind. Big data is used for many purposes, e.g. to find samples which satisfy certain requirements. If you for example want to compare the number of births on December 24 and December 25 then your data probably starts with all birthdays of the year. If you want to analyze the health effects of drinking wine then you may want to compare samples which are similar in many other ways because wine drinkers and others may have different backgrounds and lifestyles. If you want to analyze what people search in Google then there are millions of different searches. PrimeHunter (talk) 23:33, 11 November 2018 (UTC)
 Let us take the example of political polling. If you want to know which candidate will win the next election, it is indeed sufficient to ask a couple hundred people among millions of voters to get a good idea of the distribution of votes, assuming you managed to avoid sampling bias. That is because you are only interested in one variable (vote choice).
 However, if you want to predict how a single person will vote, you will involve more variables. Single-variable correlations are still easy: again, a couple hundred samples will give you sufficient support to say that older people prefer Mrs. Smith and younger people prefer Mr. Doe by a certain margin.
 What becomes big data (in the current meaning of the term) is when you want to go further and correlate across many variables: for instance, maybe old people and women and dog-lovers prefer Mrs. Smith, but old dog-loving women actually prefer Mr. Doe. If you have a whole lot of variables to take into account, small sample sizes might cause overfitting: if you have only one old dog-loving woman in your sample, it would be preposterous to declare that old dog-loving women prefer Mr. Doe by a wide margin. In some applications, you even have more variables than observations, so you are bound to overfit if you do not set a limit of some sort.
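 Tigraan's subgroup point can be sketched with a small simulation (hypothetical parameters, not from the thread): with k binary traits there are 2^k distinct subgroups, so a poll-sized sample leaves most subgroups with no observations at all, and any per-subgroup estimate would be fitted to noise.

```python
import random

random.seed(0)
k = 10        # 10 binary traits -> 2**10 = 1024 possible subgroups
n = 300       # "a couple hundred" respondents

# Each respondent is a tuple of k binary traits (old/young, dog-lover/not, ...).
sample = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(n)]

counts = {}
for person in sample:
    counts[person] = counts.get(person, 0) + 1

empty = 2**k - len(counts)
print(f"{empty} of {2**k} subgroups have no observations at all")
```

 Since at most n = 300 of the 1024 cells can be occupied, at least 724 subgroups are guaranteed to be empty; in practice collisions make it worse. This is the "more variables than observations" regime where some regularization or pooling is unavoidable.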
 You might also be interested in our article on cluster analysis. Tigraan^{Click here to contact me} 10:45, 12 November 2018 (UTC)
 Another factor that should be mentioned is that there's not much reason not to use all the data you've got whether it's likely to change the result or not; with modern computing power it's just as easy to compute an average over a million samples as over a thousand. The real bottleneck at this point is usually data collection, and normally big data applications occur when this too can be automated. RDBury (talk) 16:09, 12 November 2018 (UTC)
November 16
Limit of real root of a polynomial
Let . It's easy to see that there is always 1 real root, call it . Does have a closed form? Is there anything interesting to say about it? Numerically it is approximately 0.658626754300164. 98.190.129.147 (talk) 17:43, 16 November 2018 (UTC)
 Apparently the number is mentioned in this article. It's behind a paywall, though. RDBury (talk) 22:03, 16 November 2018 (UTC)
 That article credits the sum to [1]. Unfortunately neither says much more about the real zero than you've figured out already (one has a proof of convergence), they're both mostly about complex zeros of . 78.0.230.255 (talk) 00:50, 17 November 2018 (UTC)
November 17
nth composite number divisible by n
Are 1, 2, 5, 6, and 7 the only positive integers n for which the nth composite number is divisible by n? GeoffreyT2000 (talk) 16:14, 17 November 2018 (UTC)
 Looks like it. There aren't any other small ones, and past that point more than half of the numbers up to any bound are composite, so the nth composite is less than 2n and n can't divide it. Bubba73 ^{You talkin' to me?} 16:58, 17 November 2018 (UTC)
 Here's a proof. Please find a flaw if possible:
 The nth composite number is bounded below by n.
 Because all even numbers greater than 2 are composite, the even numbers 4, 6, ..., 2n already supply n − 1 composites up to 2n; once 2n exceeds the second odd composite number (15), the odd composites 9 and 15 raise the count to at least n + 1, so the nth composite number is always less than 2n.
 Therefore, for all n ≥ 8, the nth composite number lies strictly between n and 2n, and dividing it by n gives a number strictly between 1 and 2, which cannot be an integer.
Can anyone find a flaw here? Georgia guy (talk) 17:51, 17 November 2018 (UTC)
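 A brute-force check (an illustrative sketch, not posted in the thread) confirms both the list of solutions and the bound the proof relies on:

```python
def is_composite(m):
    """True if m has a divisor other than 1 and itself."""
    if m < 4:
        return False
    return any(m % d == 0 for d in range(2, int(m**0.5) + 1))

# Enumerate composites in order: c_1 = 4, c_2 = 6, c_3 = 8, ...
composites = [m for m in range(2, 10_000) if is_composite(m)]

# Which n divide the nth composite number?
hits = [n for n, c in enumerate(composites, start=1) if c % n == 0]
print(hits)  # → [1, 2, 5, 6, 7]

# Sanity-check the bound used in the proof: c_n < 2n for all n >= 8.
assert all(c < 2 * n for n, c in enumerate(composites, start=1) if n >= 8)
```

 The check over roughly 8,700 composites finds no solutions beyond n = 1, 2, 5, 6, 7, consistent with the argument above.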