Coursera download as pdf fails

Each extra level in a multi-index represents an extra dimension of data; taking advantage of this property gives us much more flexibility in the types of data we can represent.

Methods of MultiIndex Creation. The most straightforward way to construct a multiply indexed Series or DataFrame is to simply pass a list of two or more index arrays to the constructor. Explicit MultiIndex constructors. For more flexibility in how the index is constructed, you can instead use the class method constructors available in the pd.MultiIndex class.

For example, as we did before, you can construct the MultiIndex from a simple list of arrays, giving the index values within each level, using pd.MultiIndex.from_arrays.
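As a minimal sketch of both construction routes (the state names and population figures below are invented purely for illustration):

    import pandas as pd

    # Route 1: pass a list of two index arrays directly to the Series constructor
    pop = pd.Series([33.9, 37.3, 18.9, 19.5],
                    index=[['California', 'California', 'New York', 'New York'],
                           [2000, 2010, 2000, 2010]])

    # Route 2: build the same index explicitly with a class-method constructor
    idx = pd.MultiIndex.from_arrays([['California', 'California', 'New York', 'New York'],
                                     [2000, 2010, 2000, 2010]])
    pop2 = pd.Series([33.9, 37.3, 18.9, 19.5], index=idx)

    print(pop.equals(pop2))  # True: both approaches produce the same hierarchical index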

MultiIndex level names. Sometimes it is convenient to name the levels of the MultiIndex. You can accomplish this by passing the names argument to any of the above MultiIndex constructors, or by setting the names attribute of the index after the fact. MultiIndex for columns. In a DataFrame, the rows and columns are completely symmetric, and just as the rows can have multiple levels of indices, the columns can have multiple levels as well.

This is fundamentally four-dimensional data, where the dimensions are the subject, the measurement type, the year, and the visit number. Indexing and Slicing a MultiIndex Indexing and slicing on a MultiIndex is designed to be intuitive, and it helps if you think about the indices as added dimensions. Rearranging Multi-Indices One of the keys to working with multiply indexed data is knowing how to effectively transform the data. Sorted and unsorted indices Earlier, we briefly mentioned a caveat, but we should emphasize it more here.

Many of the MultiIndex slicing operations will fail if the index is not sorted. For hierarchically indexed data, aggregation methods such as mean, sum, and std can be passed a level parameter that controls which subset of the data the aggregate is computed on.
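Here is a small sketch of both points, using made-up data (in recent Pandas the level-wise aggregate is most robustly expressed through groupby(level=...); older versions also accepted a level argument on the aggregation methods directly):

    import numpy as np
    import pandas as pd

    index = pd.MultiIndex.from_product([['b', 'a', 'c'], [1, 2]],
                                       names=['char', 'int'])
    data = pd.Series(np.random.rand(6), index=index)

    # Partial slicing on an unsorted outer level raises an error...
    try:
        data['a':'b']
    except Exception as err:
        print("slicing failed:", err)

    # ...but works once the index is lexicographically sorted
    data = data.sort_index()
    print(data['a':'b'])

    # Level-wise aggregation: compute the mean over the 'int' level
    print(data.groupby(level='int').mean())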

Panel Data. Pandas has a few other fundamental data structures that we have not yet discussed, namely the pd.Panel and pd.Panel4D objects. These can be thought of, respectively, as three-dimensional and four-dimensional generalizations of the one-dimensional Series and two-dimensional DataFrame structures.

Once you are familiar with indexing and manipulation of data in a Series and DataFrame, Panel and Panel4D are relatively straightforward to use. Additionally, panel data is fundamentally a dense data representation, while multi-indexing is fundamentally a sparse data representation. As the number of dimensions increases, the dense representation can become very inefficient for the majority of real-world datasets. Combining Datasets: Concat and Append Some of the most interesting studies of data come from combining different data sources.

Series and DataFrames are built with this type of operation in mind, and Pandas includes functions and methods that make this sort of data wrangling fast and straightforward. Like np.concatenate, pd.concat can be used for simple concatenations of Series or DataFrame objects. Duplicate indices. One important difference between np.concatenate and pd.concat is that Pandas concatenation preserves indices, even if the result will have duplicate index values. While this is valid within DataFrames, the outcome is often undesirable.

Catching the repeats as an error. With the verify_integrity flag set to True, the concatenation will raise an exception if there are duplicate indices. Ignoring the index. Sometimes the index itself does not matter, and you would prefer it to simply be ignored; with the ignore_index flag set to True, the concatenation will create a new integer index for the resulting Series. Another alternative is to use the keys option to specify a label for the data sources; the result will be a hierarchically indexed series containing the data.
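A short sketch of these three options, with trivial made-up frames:

    import pandas as pd

    x = pd.DataFrame({'A': ['x0', 'x1']}, index=[0, 1])
    y = pd.DataFrame({'A': ['y0', 'y1']}, index=[0, 1])   # note: duplicate index values

    # verify_integrity=True turns silently duplicated indices into an explicit error
    try:
        pd.concat([x, y], verify_integrity=True)
    except ValueError as err:
        print("ValueError:", err)

    # ignore_index=True discards the source indices and builds a fresh integer index
    print(pd.concat([x, y], ignore_index=True))

    # keys=... labels each source, producing a hierarchically indexed result
    print(pd.concat([x, y], keys=['x', 'y']))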

Concatenation with joins. In the simple examples we just looked at, we were mainly concatenating DataFrames with shared column names. Consider the concatenation of two DataFrames that have some (but not all!) columns in common: by default, entries for which no data is available are filled with NA values, and the join keyword can be used to keep only the shared columns instead. The append method. Because direct array concatenation is so common, Series and DataFrame objects have an append method that can accomplish the same thing in fewer keystrokes: rather than calling pd.concat([df1, df2]), you can simply call df1.append(df2). It also is not a very efficient method, because it involves creation of a new index and data buffer.

Thus, if you plan to do multiple append operations, it is generally better to build a list of DataFrames and pass them all at once to the concat function. Combining Datasets: Merge and Join One essential feature offered by Pandas is its high-performance, in-memory join and merge operations. If you have ever worked with databases, you should be familiar with this type of data interaction.

The main interface for this is the pd.merge function and the related join method of Series and DataFrames. Relational Algebra. The behavior implemented in pd.merge is a subset of what is known as relational algebra, a formal set of rules for manipulating relational data. Pandas implements several of these fundamental building blocks in the pd.merge function and the related join method. As we will see, these let you efficiently link data from different sources. Categories of Joins. The pd.merge function implements a number of types of joins: one-to-one, many-to-one, and many-to-many joins. All three types of joins are accessed via an identical call to the pd.merge interface; the type of join performed depends on the form of the input data. Here we will show simple examples of the three types of merges, and discuss detailed options further below.

The result of the merge is a new DataFrame that combines the information from the two inputs. Many-to-one joins Many-to-one joins are joins in which one of the two key columns contains duplicate entries. Many-to-many joins Many-to-many joins are a bit confusing conceptually, but are nevertheless well defined. If the key column in both the left and right array contains duplicates, then the result is a many-to-many merge.
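A sketch of the three join types, loosely following the employee example in the text (the names, groups, and dates below are invented):

    import pandas as pd

    df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa'],
                        'group': ['Accounting', 'Engineering', 'Engineering']})
    df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake'],
                        'hire_date': [2004, 2008, 2012]})
    df3 = pd.DataFrame({'group': ['Accounting', 'Engineering'],
                        'supervisor': ['Carly', 'Guido']})
    df4 = pd.DataFrame({'group': ['Accounting', 'Accounting', 'Engineering'],
                        'skills': ['math', 'spreadsheets', 'coding']})

    one_to_one = pd.merge(df1, df2)           # key column unique on both sides
    many_to_one = pd.merge(one_to_one, df3)   # 'group' duplicated on the left only
    many_to_many = pd.merge(df1, df4)         # 'group' duplicated on both sides
    print(many_to_many)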

This will be perhaps most clear with a concrete example. Consider the following, where we have a DataFrame showing one or more skills associated with a particular group. However, often the column names will not match so nicely, and pd.merge provides a variety of options for handling this, such as the on, left_on, and right_on keywords for specifying key columns explicitly.

Specifying Set Arithmetic for Joins In all the preceding examples we have glossed over one important consideration in performing a join: the type of set arithmetic used in the join. This comes up when a value appears in one key column but not the other. By default, the result contains the intersection of the two sets of inputs; this is what is known as an inner join.

We can specify this explicitly using the how keyword, which defaults to 'inner'. An outer join returns a join over the union of the input columns, and fills in all missing values with NAs. The left join and right join return joins over the left entries and right entries, respectively. All of these options can be applied straightforwardly to any of the preceding join types. Overlapping Column Names: The suffixes Keyword. Finally, you may end up in a case where your two input DataFrames have conflicting column names.
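A quick sketch of the how keyword on two small frames with only one shared key value (the names and foods are made up for illustration):

    import pandas as pd

    df6 = pd.DataFrame({'name': ['Peter', 'Paul', 'Mary'],
                        'food': ['fish', 'beans', 'bread']})
    df7 = pd.DataFrame({'name': ['Mary', 'Joseph'],
                        'drink': ['wine', 'beer']})

    print(pd.merge(df6, df7, how='inner'))   # intersection of keys: only Mary
    print(pd.merge(df6, df7, how='outer'))   # union of keys, missing entries become NaN
    print(pd.merge(df6, df7, how='left'))    # keep every row of df6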

If these defaults are inappropriate, it is possible to specify a custom suffix using the suffixes keyword. Example: US States Data. Here we will consider an example of some data about US states and their populations. After merging, we can look for rows where the population is null to see which entries failed to match. More importantly, we see also that some of the new state entries are also null, which means that there was no corresponding entry in the abbrevs key!
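A minimal sketch of the suffixes keyword (the rank data below is invented):

    import pandas as pd

    df8 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'rank': [1, 2, 3, 4]})
    df9 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'rank': [3, 1, 4, 2]})

    # Default behavior appends _x and _y to the conflicting 'rank' columns
    print(pd.merge(df8, df9, on='name').columns.tolist())   # ['name', 'rank_x', 'rank_y']

    # Custom suffixes make the origin of each column explicit
    print(pd.merge(df8, df9, on='name', suffixes=['_L', '_R']).columns.tolist())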

We can fix these quickly by filling in appropriate entries. Now we can merge the result with the area data using a similar procedure. We can see that by far the densest region in this dataset is Washington, DC (i.e., the District of Columbia).

We can also check the end of the list to find the least dense entries. This type of messy data merging is a common task when one is trying to answer questions using real-world data sources. It gives information on planets that astronomers have discovered around other stars (known as extrasolar planets, or exoplanets for short).

For example, we see in the year column that although some exoplanets were discovered decades ago, half of all known exoplanets were not discovered until quite recently. The following listing summarizes some other built-in Pandas aggregations.

Listing of Pandas aggregation methods:
count: total number of items
first, last: first and last item
mean, median: mean and median
min, max: minimum and maximum
std, var: standard deviation and variance
mad: mean absolute deviation
prod: product of all items
sum: sum of all items
These are all methods of DataFrame and Series objects.

To go deeper into the data, however, simple aggregates are often not enough. The next level of data summarization is the groupby operation, which allows you to quickly and efficiently compute aggregates on subsets of data. GroupBy: Split, Apply, Combine. Simple aggregations can give you a flavor of your dataset, but often we would prefer to aggregate conditionally on some label or index: this is implemented in the so-called groupby operation.

Rather, the GroupBy can often do this in a single pass over the data, updating the sum, mean, count, min, or other aggregate for each group along the way. The power of the GroupBy is that it abstracts away these steps: the user need not think about how the computation is done under the hood, but rather thinks about the operation as a whole. This object is where the magic is: you can think of it as a special view of the DataFrame, which is poised to dig into the groups but does no actual computation until the aggregation is applied.
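A minimal sketch of the lazy GroupBy object, with invented data:

    import pandas as pd

    df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C'],
                       'data': [0, 5, 10, 5, 10, 15]})

    # The groupby call itself does no computation; it returns a lazy GroupBy object
    grouped = df.groupby('key')
    print(grouped)

    # The split-apply-combine happens only when an aggregate is requested
    print(grouped.sum())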

Perhaps the most important operations made available by a GroupBy are aggregate, filter, transform, and apply. Column indexing.

For example, you can select a single column from the grouped planets data, such as the orbital period. As with the GroupBy object, no computation is done until we call some aggregate on the object. Iteration over groups. The GroupBy object supports direct iteration over the groups, returning each group as a Series or DataFrame. Dispatch methods. Through some Python class magic, any method not explicitly implemented by the GroupBy object will be passed through and called on the groups, whether they are DataFrame or Series objects.

For example, you can use the describe method of DataFrames to perform a set of aggregations that describe each group in the data. The newest methods seem to be Transit Timing Variation and Orbital Brightness Modulation, which were not used to discover a new planet until the last few years covered by the dataset. This is just one example of the utility of dispatch methods.

Notice that they are applied to each individual group, and the results are then combined within GroupBy and returned. Aggregate, filter, transform, apply The preceding discussion focused on aggregation for the combine operation, but there are more options available.

In particular, GroupBy objects have aggregate, filter, transform, and apply methods that efficiently implement a variety of useful operations before combining the grouped data. The aggregate method can take a string, a function, or a list thereof, and compute all the aggregates at once.

Here, because group A does not have a standard deviation greater than 4, it is dropped from the result. For such a transformation, the output is the same shape as the input; a common example is to center the data by subtracting the group-wise mean. The apply method lets you apply an arbitrary function to the group results.

The function should take a DataFrame, and return either a Pandas object (e.g., a DataFrame or Series) or a scalar. The grouping key can also be a list, array, series, or index providing the grouping labels; the key can be any series or list with a length matching that of the DataFrame. Similar to such a mapping, you can pass any Python function that will input the index value and output the group. Further, any of the preceding key choices can be combined to group on a multi-index. Applying these ideas to the planets data, we immediately gain a coarse understanding of when and how planets have been discovered over the past several decades! To make the aggregate, filter, transform, and apply methods concrete, see the sketch below.
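The following is a hedged sketch of all four methods on a small invented frame; the data2 values are chosen so that group A fails the standard-deviation filter, as in the discussion above:

    import pandas as pd

    df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C'],
                       'data1': range(6),
                       'data2': [5, 0, 3, 3, 7, 9]})
    g = df.groupby('key')

    # aggregate: several reductions at once, by name
    print(g.aggregate(['min', 'median', 'max']))

    # filter: drop whole groups that fail a predicate (group A here)
    print(g.filter(lambda grp: grp['data2'].std() > 4))

    # transform: output has the same shape as the input (group-centered data)
    print(g.transform(lambda grp: grp - grp.mean()))

    # apply: arbitrary function taking a DataFrame and returning a DataFrame, Series, or scalar
    print(g.apply(lambda grp: grp.assign(data1=grp['data1'] / grp['data2'].sum())))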

A pivot table is a similar operation that is commonly seen in spreadsheets and other programs that operate on tabular data. The pivot table takes simple column- wise data as input, and groups the entries into a two-dimensional table that provides a multidimensional summarization of the data.

The difference between pivot tables and GroupBy can sometimes cause confusion; it helps me to think of pivot tables as essentially a multidimensional version of GroupBy aggregation.

That is, you split-apply-combine, but both the split and the combine happen not across a one-dimensional index, but across a two-dimensional grid. This is useful, but we might like to go one step deeper and look at survival by both sex and, say, class. First-class women survived with near certainty (hi, Rose!).
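As a sketch of the two equivalent routes, assuming the Titanic example dataset that ships with Seaborn (it is downloaded on first use):

    import seaborn as sns

    titanic = sns.load_dataset('titanic')

    # GroupBy route: survival rate by sex and class, manually unstacked
    print(titanic.groupby(['sex', 'class'])['survived'].mean().unstack())

    # Equivalent, more readable pivot-table route (aggfunc defaults to the mean)
    print(titanic.pivot_table('survived', index='sex', columns='class'))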

For example, we might be interested in looking at age as a third dimension. The aggfunc keyword controls what type of aggregation is applied, which is a mean by default. Additionally, it can be specified as a dictionary mapping a column to any of the desired options. At times it is useful to compute totals along each grouping; this can be done via the margins keyword. (Figure: total number of US births by year and gender.) With a simple pivot table and plot method, we can immediately see the annual trend in births by gender.

We must start by cleaning the data a bit, removing outliers caused by mistyped dates (e.g., a nonexistent June 31st) or missing values. We can then create a datetime index from the year, month, and day columns, which allows us to quickly compute the weekday corresponding to each row.

(Figure: average daily births by day of week and decade.) Apparently births are slightly less common on weekends than on weekdays! Note that the most recent decades are missing from this view, because for the later years the CDC data contains only the month of birth, not the day. From this, we can use the plot method to plot the data. Vectorized String Operations. One strength of Python is its relative ease in handling and manipulating string data. Pandas builds on this and provides a comprehensive set of vectorized string operations that become an essential piece of the type of munging required when one is working with (read: cleaning up) real-world data.

Introducing Pandas String Operations We saw in previous sections how tools like NumPy and Pandas generalize arithmetic operations so that we can easily and quickly perform the same operation on many array elements.

Here is a list of Pandas str methods that mirror Python string methods: len, lower, upper, translate, islower, isupper, isnumeric, isalnum, isdecimal, isalpha, isdigit, isspace, istitle, ljust, rjust, center, zfill, startswith, endswith, find, rfind, index, rindex, split, rsplit, partition, rpartition, strip, rstrip, lstrip, capitalize, swapcase. Notice that these have various return values.

Some, like lower, return a series of strings, while others return numbers, Booleans, or lists. With these, you can do a wide range of interesting operations. The get and slice operations, in particular, enable vectorized element access from each array. For example, we can get a slice of the first three characters of each entry using str.slice(0, 3), or equivalently with Python-style indexing via str[0:3]. These get and slice methods also let you access elements of arrays returned by split.
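A small sketch of these vectorized string operations, using a Series of Monty Python member names as in the text:

    import pandas as pd

    monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam',
                       'Eric Idle', 'Terry Jones', 'Michael Palin'])

    print(monte.str.lower())             # element-wise lowercase strings
    print(monte.str.len())               # numbers
    print(monte.str.startswith('T'))     # Booleans
    print(monte.str.split())             # lists
    print(monte.str[0:3])                # vectorized slicing, same as str.slice(0, 3)
    print(monte.str.split().str.get(-1)) # last element of each split list, i.e. the surname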

For example, to extract the last name of each entry, we can combine split and get. The get_dummies method is useful when your data has a column containing some sort of coded indicator. Example: Recipe Database. These vectorized string operations become most useful in the process of cleaning up messy, real-world data. Our goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand.

One way we can do this is to actually construct a string representation containing all these JSON entries, and then load the whole thing with pd.read_json. There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web. It is data munging like this that Python really excels at.

We can then build a Boolean DataFrame indicating, for each recipe, whether its ingredient list mentions each spice of interest. Of course, building a very robust recipe recommendation system would require a lot more work! Extracting full ingredient lists from each recipe would be an important piece of the task; unfortunately, the wide variety of formats used makes this a relatively time-consuming process.

This points to the truism that in data science, cleaning and munging of real-world data often comprises the majority of the work, and Pandas provides the tools that can help you do this efficiently. Working with Time Series Pandas was developed in the context of financial modeling, so as you might expect, it contains a fairly extensive set of tools for working with dates, times, and time- indexed data.

This short section is by no means a complete guide to the time series tools available in Python or Pandas, but instead is intended as a broad overview of how you as a user should approach working with time series. We will start with a brief discussion of tools for dealing with dates and times in Python, before moving more specifically to a discussion of the tools provided by Pandas. After listing some resources that go into more depth, we will review some short examples of working with time series data in Pandas.

Dates and Times in Python. The Python world has a number of available representations of dates, times, deltas, and timespans. While the time series tools provided by Pandas tend to be the most useful for data science applications, it is helpful to see their relationship to other packages used in Python. Python's basic objects for working with dates and times live in the built-in datetime module; along with the third-party dateutil module, you can use it to quickly perform a host of useful operations on dates and times.

A related package to be aware of is pytz, which contains tools for working with the most migraine-inducing piece of time series data: time zones. The power of datetime and dateutil lies in their flexibility and easy syntax: you can use these objects and their built-in methods to easily perform nearly any operation you might be interested in.

The datetime64 dtype encodes dates as 64-bit integers, and thus allows arrays of dates to be represented very compactly. Because the datetime64 object is limited to 64-bit precision, the range of encodable times is 2^64 times this fundamental unit. In other words, datetime64 imposes a trade-off between time resolution and maximum time span. For example, if you want a time resolution of one nanosecond, you only have enough information to encode a range of 2^64 nanoseconds, which is just under 600 years.

NumPy will infer the desired unit from the input; for example, passing a date string like '2015-07-04' to np.datetime64 yields a day-based datetime. Dates and times in Pandas: best of both worlds. Pandas builds upon all the tools just discussed to provide a Timestamp object, which combines the ease of use of datetime and dateutil with the efficient storage and vectorized interface of numpy.datetime64. For example, we can use Pandas tools to repeat the demonstration from above.
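A brief sketch of both layers (the dates are arbitrary; the non-standard string relies on dateutil's flexible parsing):

    import numpy as np
    import pandas as pd

    # NumPy: a compact datetime64 value with day resolution, supporting vectorized arithmetic
    date = np.datetime64('2015-07-04')
    print(date + np.arange(5))          # five consecutive days

    # Pandas: Timestamp combines flexible parsing with datetime64 storage
    t = pd.to_datetime('4th of July, 2015')
    print(t.strftime('%A'))             # datetime-style methods still work
    print(t + pd.to_timedelta(np.arange(5), 'D'))  # vectorized, NumPy-style date arithmetic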

Pandas Time Series: Indexing by Time. Where the Pandas time series tools really become useful is when you begin to index data by timestamps; the associated index structure is DatetimeIndex. For time periods, Pandas provides the Period type, which encodes a fixed-frequency interval based on numpy.datetime64.

The associated index structure is PeriodIndex; for time deltas or durations, the associated index structure is TimedeltaIndex. While these class objects can be invoked directly, it is more common to use the pd.to_datetime function, which can parse a wide variety of formats. Similarly, regular date sequences are most easily created with pd.date_range for timestamps, pd.period_range for periods, and pd.timedelta_range for time deltas; for example, we can construct a range of hourly timestamps or a set of monthly periods. Just as we saw the D (day) and H (hour) codes previously, we can use such codes to specify any desired frequency spacing.

The main codes available include D (calendar day), B (business day), W (weekly), M (month end), Q (quarter end), A (year end), H (hours), T (minutes), and S (seconds). Adding an S suffix to the month, quarter, and year codes (MS, QS, AS) marks the frequency at the beginning of the period instead of the end. On top of this, codes can be combined with numbers to specify other frequencies.

For example, for a frequency of 2 hours 30 minutes, we can combine the hour (H) and minute (T) codes into the frequency string '2H30T'. Resampling, Shifting, and Windowing. The ability to use dates and times as indices to intuitively organize and access data is an important piece of the Pandas time series tools. The benefits of indexed data in general (automatic alignment during operations, intuitive data slicing and access, etc.) certainly still apply, and Pandas provides several additional time series-specific operations.
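A quick sketch of the range constructors and frequency codes (newer Pandas versions prefer the lowercase aliases 'h' and 'min', but the classic codes shown here still work):

    import pandas as pd

    # Eight hourly timestamps starting from a given date
    print(pd.date_range('2015-07-03', periods=8, freq='H'))

    # Eight monthly periods, and a sequence of hourly-increasing durations
    print(pd.period_range('2015-07', periods=8, freq='M'))
    print(pd.timedelta_range(0, periods=10, freq='H'))

    # Codes combine with numbers: a frequency of 2 hours 30 minutes
    print(pd.timedelta_range(0, periods=9, freq='2H30T'))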

We will take a look at a few of those here, using some stock price data as an example. The primary difference between the two is that resample is fundamentally a data aggregation, while asfreq is fundamentally a data selection. Here we will resample the data at the end of each business year, using the 'BA' (business year end) frequency.

For up-sampling, resample and asfreq are largely equivalent, though resample has many more options available. In this case, the default for both methods is to leave the up-sampled points empty, that is, filled with NA values. Just as with the pd.fillna function discussed previously, asfreq accepts a method argument to specify how values are imputed.

Here, we will resample the business day data at a daily frequency (i.e., including weekends). (Figure: comparison between forward-fill and back-fill interpolation.) The top panel is the default: non-business days are left as NA values and do not appear on the plot.
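A sketch of resample versus asfreq using a synthetic "price" series in place of the stock data used in the text (newer Pandas prefers the 'ME' alias over 'M'):

    import numpy as np
    import pandas as pd

    # Synthetic business-day price series standing in for the stock data
    index = pd.date_range('2020-01-01', periods=260, freq='B')
    prices = pd.Series(np.cumsum(np.random.randn(260)) + 100, index=index)

    # Down-sampling: resample aggregates (mean over each month),
    # while asfreq simply selects the value at each month end
    print(prices.resample('M').mean().head())
    print(prices.asfreq('M').head())

    # Up-sampling to calendar days: gaps appear on weekends...
    daily = prices.asfreq('D')
    print(daily.isna().sum())

    # ...and can be filled by forward- or back-filling
    print(prices.asfreq('D', method='ffill').isna().sum())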

The bottom panel shows the differences between two strategies for filling the gaps: forward-filling and backward-filling. Time-shifts. Another common time series-specific operation is shifting of data in time. Pandas has two closely related methods for computing this: shift and tshift. In short, the difference between them is that shift shifts the data, while tshift shifts the index.

In both cases, the shift is specified in multiples of the frequency. (Figure: comparison between shift and tshift.) We see here that shift shifts the data by the given number of days, pushing some of it off the end of the graph (and leaving NA values at the other end), while tshift shifts the index values by the same number of days.
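A small sketch of the two behaviors; note that tshift has been removed from recent Pandas releases, where the same index-shifting effect is obtained by passing freq= to shift:

    import numpy as np
    import pandas as pd

    s = pd.Series(np.arange(5), index=pd.date_range('2020-01-01', periods=5, freq='D'))

    # shift moves the data relative to the index, introducing NaN at one end
    print(s.shift(2))

    # the old tshift moved the index instead; in recent Pandas, pass freq= to shift
    print(s.shift(2, freq='D'))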

A common context for this type of shift is computing differences over time. (Figure: return on investment to present day for Google stock.) This helps us to see the overall trend in Google stock: thus far, the most profitable times to invest in Google have been (unsurprisingly, in retrospect) shortly after its IPO, and in the middle of the recession.

Rolling windows. Rolling statistics are a third type of time series-specific operation implemented by Pandas. These can be accomplished via the rolling attribute of Series and DataFrame objects, which returns a view similar to what we saw with the groupby operation; this rolling view makes available a number of aggregation operations by default.
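A sketch of a centered rolling window on a synthetic daily series (the random walk below simply stands in for the stock data discussed in the text):

    import numpy as np
    import pandas as pd

    index = pd.date_range('2020-01-01', periods=365, freq='D')
    series = pd.Series(np.random.randn(365).cumsum(), index=index)

    # A centered 30-day rolling view supports the usual aggregations
    rolling = series.rolling(30, center=True)
    print(rolling.mean().dropna().head())
    print(rolling.std().dropna().head())

    # aggregate and apply work here too, for custom rolling computations
    print(rolling.aggregate(['min', 'mean', 'max']).dropna().head())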

(Figure: rolling statistics on Google stock prices.) As with groupby operations, the aggregate and apply methods can be used for custom rolling computations. Although it is now a few years old, Wes McKinney's Python for Data Analysis remains an invaluable resource on the use of Pandas. As always, you can also use the IPython help functionality to explore and try further options available to the functions and methods discussed here. I find this often is the best way to learn a new Python tool.

We can gain more insight by resampling the data to a coarser grid. The following code specifies both the width of the window (we chose 50 days) and the width of the Gaussian within the window (we chose 10 days).

(Figure: Gaussian smoothed weekly bicycle counts.) Digging into the data. While the smoothed data views are useful to get an idea of the general trend in the data, they hide much of the interesting structure. For example, we might want to look at the average traffic as a function of the time of day.

This is likely evidence of a strong component of commuter traffic crossing the bridge. This is further evidenced by the differences between the western sidewalk (generally used going toward downtown Seattle), which peaks more strongly in the morning, and the eastern sidewalk (generally used going away from downtown Seattle), which peaks more strongly in the evening. (Figure: average hourly bicycle counts.) We also might be curious about how things change based on the day of the week.

(Figure: average daily bicycle counts.) This shows a strong distinction between weekday and weekend totals, with around twice as many average riders crossing the bridge on Monday through Friday as on Saturday and Sunday. (Figure: average hourly bicycle counts by weekday and weekend.) The result is very interesting: we see a bimodal commute pattern during the work week, and a unimodal recreational pattern during the weekends. Turning to performance: as of version 0.13, Pandas includes the eval and query functions, which rely on the Numexpr package for fast element-wise computation on DataFrames.

In this notebook we will walk through their use and give some rules of thumb about when you might think about using them. The Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays. The Pandas eval and query tools that we will discuss here are conceptually similar, and depend on the Numexpr package.

Other operations, such as function calls, conditional statements, loops, and other more involved constructs, are currently not implemented in pd.eval. The benefit of the DataFrame.eval method is that columns can be referred to by name. DataFrame.eval also supports assignment to columns, and local Python variables can be referenced with the @ character; note that this @ character is only supported by the DataFrame.eval and DataFrame.query methods, not by the pandas.eval function. A filtering operation that selects rows based on comparisons of columns cannot be expressed directly with DataFrame.eval, but it can with DataFrame.query.
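A compact sketch of these capabilities on a random DataFrame (the column names and the 0.5 threshold are arbitrary):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    df = pd.DataFrame(rng.random((1000, 3)), columns=['A', 'B', 'C'])

    # Column-wise expression without building full intermediate arrays
    result = df.eval('(A + B) / (C - 1)')

    # Assignment to a (new or existing) column inside eval
    df.eval('D = (A + B) / C', inplace=True)

    # Local Python variables are referenced with the @ character
    column_mean = df['A'].mean()
    above_mean = df.eval('A > @column_mean')

    # query expresses a row-filtering operation directly
    filtered = df.query('A < 0.5 and B < 0.5')
    print(filtered.equals(df[(df['A'] < 0.5) & (df['B'] < 0.5)]))  # True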

Memory use is the most predictable aspect. You can check the approximate size of your array in bytes using df.values.nbytes.

Still, much has been omitted from our discussion. Python for Data Analysis: written by Wes McKinney (the original creator of Pandas), this book contains much more detail on the package than we had room for in this chapter. The book also has many entertaining examples of applying Pandas to gain insight from real-world datasets. Pandas on Stack Overflow: Pandas has so many users that any question you have has likely been asked and answered on Stack Overflow.

Using Pandas is a case where some Google-Fu is your best friend. The PyCon tutorials in particular tend to be given by very well-vetted presenters. Visualization with Matplotlib. Matplotlib is a multiplatform data visualization library built on NumPy arrays, and designed to work with the broader SciPy stack. Its creator, John Hunter, took this as a cue to set out on his own, and the Matplotlib package was born, with version 0.1 released in 2003. Matplotlib supports dozens of backends and output types, which means you can count on it to work regardless of which operating system you are using or which output format you wish.

This cross-platform, everything-to-everyone approach has been one of the great strengths of Matplotlib. In recent years, however, the interface and style of Matplotlib have begun to show their age. For this reason, I believe that Matplotlib itself will remain a vital piece of the data visualization stack, even if new tools mean the community gradually moves away from using the Matplotlib API directly.

General Matplotlib Tips. Before we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package. Importing matplotlib. Just as we use the np shorthand for NumPy and the pd shorthand for Pandas, we will use some standard shorthands for Matplotlib imports: import matplotlib as mpl and import matplotlib.pyplot as plt. Setting styles. We will use the plt.style directive to choose appropriate aesthetic styles for our figures.
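A minimal sketch of the standard imports, a style setting, and a saved figure (the 'classic' style and output filename are arbitrary choices):

    import numpy as np
    import matplotlib as mpl
    import matplotlib.pyplot as plt

    plt.style.use('classic')          # choose an aesthetic style for the figures

    x = np.linspace(0, 10, 100)
    fig = plt.figure()
    plt.plot(x, np.sin(x), '-')
    plt.plot(x, np.cos(x), '--')
    fig.savefig('my_figure.png')      # save to any supported output format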



