Lecture 4: Noise?
Last time

- We started by taking up our discussion of developing a boxplot for more than a single variable, a graphic to summarize the shape of a two-dimensional point cloud
- We then examined tools for viewing (continuous) data in two or more dimensions, spending some time with projections and linked displays
- We ended with some material for your (first) homework assignment
- The subject of graphics will not end here, however; we'll also examine spatial (map-based) data as well as text as data later in the term
Today

- We will give you a little tour of R in an attempt to better prepare you for doing something really innovative with the Registrar's data!
- We'll talk a little about the history of R and then about operations on vectors (we'll do something like this every other lecture or so)
- Then we'll look at inference, examining a simple randomized controlled trial of the kind that comes up in A/B testing; we'll take a small historical detour and talk about the first such trial (which arose in a medical application)
- If there's time, we'll talk a bit about random number generation!
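As a small preview of the vector operations mentioned above, here is a minimal R sketch; the vector `x` and its values are invented for illustration and have nothing to do with the Registrar's data:

```r
# Create a numeric vector and operate on it elementwise
x <- c(2, 4, 6, 8, 10)

x + 1        # elementwise addition: 3 5 7 9 11
x * x        # elementwise multiplication: 4 16 36 64 100
x[x > 5]     # logical indexing keeps elements larger than 5: 6 8 10

# Summaries reduce a vector to a single number
mean(x)      # 6
sum(x > 5)   # a logical vector coerces to 0/1, so this counts: 3
```

The key idea, which we'll return to throughout the term, is that R operates on whole vectors at once, so explicit loops are rarely needed.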
A brief and abridged history of statistical computing...
Statistical Computing in the 1960s

- A number of statistical systems and programs already existed; BMD and P-Stat were in current use and GenStat was being developed
- These systems grew out of specific application areas and tended to offer pre-packaged analyses
- At the time, most statistics researchers would not be directly involved in analyzing data; programmers (read: graduate students) would do the grubby work when necessary

"Systems like BMD and SAS, for example, and P-Stat to some extent, and GenStat is another good example, all grew up in environments where statisticians were required to do some fairly committed routine analysis. So BMD, of course, comes from a biomedical field; SAS from several areas, but medical again; and GenStat comes from an agricultural background. Now in all those situations, the statistics groups, amongst other duties, were expected to be doing this kind of analysis to order. You know, the data would come along from an experiment, or a clinical trial, or other sources, and it was part of the job of the statisticians to produce analyses. Now often the analyses that they produced were relatively predetermined, or at least that's how it worked out."

Interview with John Chambers, 2002
The mid-1960s at Bell Labs

- Statistics Research at Bell Labs tackled large-scale data analysis projects with teams of researchers and "programmers"
- Unlike much of the statistical computing of the day, this kind of work was not well suited to pre-packaged programs
- Even then, AT&T was creating large-scale applications; data from Telstar, an early (1960s) communications satellite, involved tens of thousands of observations

Launched by NASA aboard a Delta rocket from Cape Canaveral on July 10, 1962, Telstar was the first privately sponsored space launch. A medium-altitude satellite, Telstar was placed in an elliptical orbit (completed once