Data Analysis: A Bayesian Tutorial

by Devinderjit Sivia and John Skilling
Edition: 2nd
Format: Hardcover
Pub. Date: 2006-07-27
Publisher(s): Oxford University Press
List Price: $149.81

Digital

Rent Digital Options
  • 180 days online access; 180 days downloadable: $32.99
  • 365 days online access; 365 days downloadable: $37.50
  • 1460 days online access; lifetime downloadable access: $49.99
*To support the delivery of the digital material to you, a non-refundable digital delivery fee of $3.99 will be charged on each digital item.

New Textbook

We're Sorry
Sold Out

Used Textbook

We're Sorry
Sold Out

How Marketplace Works:

  • This item is offered by an independent seller and is not shipped from our warehouse.
  • Item details like edition and cover design may differ from our description; see seller's comments before ordering.
  • Sellers must confirm and ship within two business days; otherwise, the order will be cancelled and refunded.
  • Marketplace purchases cannot be returned to eCampus.com. Contact the seller directly with any inquiries; if the seller does not respond within two days, contact customer service.
  • Additional shipping costs apply to Marketplace purchases. Review shipping costs at checkout.

Summary

Statistics lectures have been a source of much bewilderment and frustration for generations of students. This book attempts to remedy the situation by expounding a logical and unified approach to the whole subject of data analysis. This text is intended as a tutorial guide for senior undergraduates and research students in science and engineering. After explaining the basic principles of Bayesian probability theory, their use is illustrated with a variety of examples ranging from elementary parameter estimation to image processing. Other topics covered include reliability analysis, multivariate optimization, least-squares and maximum likelihood, error-propagation, hypothesis testing, maximum entropy and experimental design. The Second Edition of this successful tutorial book contains a new chapter on extensions to the ubiquitous least-squares procedure, allowing for the straightforward handling of outliers and unknown correlated noise, and a cutting-edge contribution from John Skilling on a novel numerical technique for Bayesian computation called 'nested sampling'.
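
As a brief orientation (our own summary, not an excerpt from the book), the approach rests on Bayes' theorem and marginalization, and model selection turns on the evidence, the quantity that nested sampling is designed to compute:

\[
  p(\theta \mid D, I) \;=\; \frac{p(D \mid \theta, I)\, p(\theta \mid I)}{p(D \mid I)},
  \qquad
  p(\theta_1 \mid D, I) \;=\; \int p(\theta_1, \theta_2 \mid D, I)\, \mathrm{d}\theta_2 ,
\]

\[
  Z \;=\; p(D \mid I) \;=\; \int L(\theta)\, \pi(\theta)\, \mathrm{d}\theta \;=\; \int_0^1 L(X)\, \mathrm{d}X ,
\]

where X(λ) is the amount of prior mass with likelihood above λ, so the evidence becomes a one-dimensional integral over a shrinking sequence of likelihood contours. Since the book's worked programs are in C, here is a minimal toy sketch of the nested-sampling loop; every constant and name in it (the Gaussian likelihood, sigma = 0.1, N = 100 live points, the brute-force rejection step) is our own illustrative choice, not the book's listing:

    /* Hedged sketch, not from the book: nested sampling for a toy problem.
       Prior: uniform on [0,1].  Likelihood: Gaussian, centre 0.5, sigma 0.1,
       so the evidence is Z = integral of L over [0,1] ~ 0.1*sqrt(2*pi) ~ 0.2507. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N     100            /* number of live points (our choice)    */
    #define MAXIT 1000           /* iterations; final X ~ exp(-10), tiny  */

    static double loglike(double x)
    {
        double r = (x - 0.5) / 0.1;    /* toy Gaussian log-likelihood */
        return -0.5 * r * r;
    }

    static double uniform(void)        /* crude U(0,1) draw via rand() */
    {
        return rand() / (RAND_MAX + 1.0);
    }

    int main(void)
    {
        double x[N], logL[N], Z = 0.0;
        int i, j;

        srand(42);
        for (j = 0; j < N; j++) {      /* populate live points from the prior */
            x[j] = uniform();
            logL[j] = loglike(x[j]);
        }

        for (i = 0; i < MAXIT; i++) {
            int worst = 0;             /* locate the lowest-likelihood point */
            for (j = 1; j < N; j++)
                if (logL[j] < logL[worst]) worst = j;

            /* shell of prior mass between X_i = exp(-i/N) and X_{i+1} */
            double width = exp(-(double)i / N) - exp(-(double)(i + 1) / N);
            Z += width * exp(logL[worst]);

            /* replace the worst point by a prior draw with L > L*;
               brute-force rejection works in 1-D, real problems use MCMC */
            double Lstar = logL[worst];
            do {
                x[worst] = uniform();
                logL[worst] = loglike(x[worst]);
            } while (logL[worst] <= Lstar);
        }
        /* the small leftover contribution of the final live points is ignored */
        printf("Z ~ %.4f  (exact ~ 0.2507)\n", Z);
        return 0;
    }

Each pass discards the lowest-likelihood live point, credits it with a shell of prior mass shrinking roughly as exp(-i/N), and replaces it with a fresh prior sample constrained to exceed the discarded likelihood; in realistic problems that rejection step is replaced by MCMC exploration, as Chapter 9 of the book describes.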

Author Biography

Devinderjit Singh Sivia
Rutherford Appleton Laboratory
Chilton, Oxon OX11 5DJ

John Skilling
Maximum Entropy Data Consultants
42 Southgate Street
Bury St Edmunds, Suffolk IP33 2AZ

Table of Contents

PART I THE ESSENTIALS
1. The basics
1.1 Introduction: deductive logic versus plausible reasoning
1.2 Probability: Cox and the rules for consistent reasoning
1.3 Corollaries: Bayes' theorem and marginalization
1.4 Some history: Bayes, Laplace and orthodox statistics
1.5 Outline of book
2. Parameter estimation I
2.1 Example 1: is this a fair coin?
2.1.1 Different priors
2.1.2 Sequential or one-step data analysis?
2.2 Reliabilities: best estimates, error-bars and confidence intervals
2.2.1 The coin example
2.2.2 Asymmetric posterior pdfs
2.2.3 Multimodal posterior pdfs
2.3 Example 2: Gaussian noise and averages
2.3.1 Data with different-sized error-bars
2.4 Example 3: the lighthouse problem
2.4.1 The central limit theorem
3. Parameter estimation II
3.1 Example 4: amplitude of a signal in the presence of background
3.1.1 Marginal distributions
3.1.2 Binning the data
3.2 Reliabilities: best estimates, correlations and error-bars
3.2.1 Generalization of the quadratic approximation
3.2.2 Asymmetric and multimodal posterior pdfs
3.3 Example 5: Gaussian noise revisited
3.3.1 The Student-t and χ² distributions
3.4 Algorithms: a numerical interlude
3.4.1 Brute force and ignorance
3.4.2 The joys of linearity
3.4.3 Iterative linearization
3.4.4 Hard problems
3.5 Approximations: maximum likelihood and least-squares
3.5.1 Fitting a straight line
3.6 Error-propagation: changing variables
3.6.1 A useful short cut
3.6.2 Taking the square root of a number
4. Model selection
4.1 Introduction: the story of Mr A and Mr B
4.1.1 Comparison with parameter estimation
4.1.2 Hypothesis testing
4.2 Example 6: how many lines are there?
4.2.1 An algorithm
4.2.2 Simulated data
4.2.3 Real data
4.3 Other examples: means, variance, dating and so on
4.3.1 The analysis of means and variance
4.3.2 Luminescence dating
4.3.3 Interlude: what not to compute
5. Assigning probabilities
5.1 Ignorance: indifference and transformation groups
5.1.1 The binomial distribution
5.1.2 Location and scale parameters
5.2 Testable information: the principle of maximum entropy
5.2.1 The monkey argument
5.2.2 The Lebesgue measure
5.3 MaxEnt examples: some common pdfs
5.3.1 Averages and exponentials
5.3.2 Variance and the Gaussian distribution
5.3.3 MaxEnt and the binomial distribution
5.3.4 Counting and Poisson statistics
5.4 Approximations: interconnections and simplifications
5.5 Hangups: priors versus likelihoods
5.5.1 Improper pdfs
5.5.2 Conjugate and reference priors
PART II ADVANCED TOPICS
6. Non-parametric estimation
6.1 Introduction: free-form solutions
6.1.1 Singular value decomposition
6.1.2 A parametric free-form solution?
6.2 MaxEnt: images, monkeys and a non-uniform prior
6.2.1 Regularization
6.3 Smoothness: fuzzy pixels and spatial correlations
6.3.1 Interpolation
6.4 Generalizations: some extensions and comments
6.4.1 Summary of the basic strategy
6.4.2 Inference or inversion?
6.4.3 Advanced examples
7. Experimental design
7.1 Introduction: general issues
7.2 Example 7: optimizing resolution functions
7.2.1 An isolated sharp peak
7.2.2 A free-form solution
7.3 Calibration, model selection and binning
7.4 Information gain: quantifying the worth of an experiment
8. Least-squares extensions
8.1 Introduction: constraints and restraints
8.2 Noise scaling: a simple global adjustment
8.3 Outliers: dealing with erratic data
8.3.1 A conservative formulation
8.3.2 The good-and-bad data model
8.3.3 The Cauchy formulation
8.4 Background removal
8.5 Correlated noise: avoiding over-counting
8.5.1 Nearest-neighbour correlations
8.5.2 An elementary example
8.5.3 Time series
8.6 Log-normal: least-squares for magnitude data
9. Nested sampling
9.1 Introduction: the computational problem
9.1.1 Evidence and posterior
9.2 Nested sampling: the basic idea
9.2.1 Iterating a sequence of objects
9.2.2 Terminating the iterations
9.2.3 Numerical uncertainty of computed results
9.2.4 Programming nested sampling in 'C'
9.3 Generating a new object by random sampling
9.3.1 Markov chain Monte Carlo (MCMC) exploration
9.3.2 Programming the lighthouse problem in 'C'
9.4 Monte Carlo sampling of the posterior
9.4.1 Posterior distribution
9.4.2 Equally-weighted posterior samples: staircase sampling
9.4.3 The lighthouse posterior
9.4.4 Metropolis exploration of the posterior
9.5 How many objects are needed?
9.5.1 Bi-modal likelihood with a single 'gate'
9.5.2 Multi-modal likelihoods with several 'gates'
9.6 Simulated annealing
9.6.1 The problem of phase changes
9.6.2 Example: order/disorder in a pseudo-crystal
9.6.3 Programming the pseudo-crystal in 'C'
10. Quantification
10.1 Exploring an intrinsically non-uniform prior
10.1.1 Binary trees for controlling MCMC transitions
10.2 Example: ON/OFF switching
10.2.1 The master engine: flipping switches individually
10.2.2 Programming which components are present
10.2.3 Another engine: exchanging neighbouring switches
10.2.4 The control of multiple engines
10.3 Estimating quantities
10.3.1 Programming the estimation of quantities in 'C'
10.4 Final remarks
A. Gaussian integrals
A.1 The univariate case
A.2 The bivariate extension
A.3 The multivariate generalization
B. Cox's derivation of probability
B.1 Lemma 1: associativity equation
B.2 Lemma 2: negation
Bibliography
Index

An electronic version of this book is available through VitalSource.

This book is viewable on PC, Mac, iPhone, iPad, iPod Touch, and most smartphones.

By purchasing, you will be able to view this book online and download it for the duration you choose.

Digital License

You are licensing a digital product for a set duration. Durations are set forth in the product description, with "Lifetime" typically meaning five (5) years of online access and permanent download to a supported device. All licenses are non-transferable.

A downloadable version of this book is available through the eCampus Reader or compatible Adobe readers.

Applications are available on iOS, Android, PC, Mac, and Windows Mobile platforms.

Please view the compatibility matrix prior to purchase.