22 - Sample Size Choice for Microarray Experiments

Published online by Cambridge University Press:  23 November 2009

Kim-Anh Do (University of Texas, MD Anderson Cancer Center)
Peter Müller (Swiss Federal Institute of Technology, Zürich)
Marina Vannucci (Rice University, Houston)

Abstract

We review Bayesian sample size arguments for microarray experiments, focusing on a decision theoretic approach. We start by introducing a choice based on minimizing expected loss as a theoretical ideal. Practical limitations of this approach quickly lead us to consider a compromise solution that combines this idealized solution with a sensitivity argument. The approach we finally propose relies on conditional expected loss, conditional on an assumed true level of differential expression to be discovered. The expression for expected loss can be interpreted as a version of power, thus providing for ease of interpretation and communication.
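To make the "conditional expected loss as a version of power" idea concrete, the following is a minimal simulation sketch. Everything in it is an illustrative assumption rather than the chapter's actual probability model or loss function: the effect size delta stands in for the assumed true level of differential expression, and a simple two-sample t-test with a fixed threshold stands in for the decision rule.

```python
# Hedged sketch: Monte Carlo estimate of the probability of flagging a gene
# whose true differential expression is delta, as a function of sample size.
# The t-test screen and the 0.001 threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def conditional_power(n_per_group, delta=1.0, sigma=1.0, alpha=0.001, n_sims=2000):
    """Fraction of simulated experiments in which a gene with true
    log fold change delta is declared differentially expressed."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, sigma, n_per_group)    # control condition
        y = rng.normal(delta, sigma, n_per_group)  # treatment, shifted by delta
        _, p = stats.ttest_ind(x, y)
        hits += (p < alpha)
    return hits / n_sims

for n in (5, 10, 15, 20):
    print(n, round(conditional_power(n), 3))
```

Plotting this conditional power against candidate sample sizes gives the investigator the kind of decision support described below, rather than a single "optimal" answer.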

Introduction

We discuss approaches for a Bayesian sample size argument in microarray experiments. As is the case for most sample size calculations in clinical trials and other biomedical applications, the purpose of the calculation is to provide the investigator with decision support and allow an informed sample size choice, rather than to provide a black-box method that delivers an optimal sample size.

Several classical approaches for microarray sample size choices have been proposed in the recent literature. Pan et al. (2002) develop a traditional power argument, using a finite mixture of normals as the sampling model for difference scores in a group comparison microarray experiment. Zien et al. (2002) propose plotting ROC-type curves to show achievable combinations of false-negative and false-positive rates. Mukherjee et al. (2003) take a machine learning perspective: they posit a parametric learning curve for the empirical error rate as a function of the sample size, and proceed to estimate the unknown parameters of that curve (a sketch of this idea follows).
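The learning-curve idea can be sketched as follows. The inverse power-law form and the toy error rates below are illustrative assumptions, not the exact model of Mukherjee et al. (2003); the point is only that a curve fitted to error rates observed at small pilot sample sizes can be extrapolated to larger experiments.

```python
# Hedged sketch: fit a parametric learning curve to empirical error rates
# observed at pilot sample sizes, then extrapolate to a larger sample size.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, alpha, b):
    # assumed inverse power-law decay of the error rate with sample size n
    return a * n ** (-alpha) + b

# hypothetical cross-validated error rates at pilot sample sizes
n_obs = np.array([10, 15, 20, 30, 40])
err_obs = np.array([0.32, 0.27, 0.24, 0.20, 0.18])

params, _ = curve_fit(learning_curve, n_obs, err_obs, p0=[1.0, 0.5, 0.05])
a, alpha, b = params

# predicted error rate if 100 arrays were collected
print(round(learning_curve(100, a, alpha, b), 3))
```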

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2006
