Seminar Series, 2007-2008


During the academic year the department offers a seminar series featuring outside speakers, current faculty, and advanced graduate students. Suggestions for topics or speakers are welcome and may be emailed to the seminar chairperson, Dr. Mai Zhou, at mai@ms.uky.edu.

Unless otherwise stated, all seminars are at 4:00pm, with refreshments preceding the seminar at 3:30pm in POT 845.

Thursday, September 13, 2007. Mathias Drton, University of Chicago. CB 110, 4:00PM. Refreshments in POT 845 at 3:30pm.

Likelihood Ratio Tests and Singularities


Friday, September 28, 2007. Feifang Hu, University of Virginia. CB 102, 4:00PM.

Adaptive Randomization in Clinical Trials

While clinical trials may provide information on new treatments that can affect countless lives in the future, randomization means that volunteers in the trial receive the benefit of the new treatment only by chance. In most clinical trials the treatment assignments are balanced equally, so the probability that a volunteer receives the potentially better treatment is only 50%. Response-adaptive randomization uses accruing data to skew the allocation probabilities toward the treatment performing better so far in the trial, thereby mitigating this problem to some degree. In this talk I give a brief review of adaptive randomization and then propose new response-adaptive randomization procedures with desirable properties. The resulting procedures provide efficient methods for determining whether a new treatment is effective, while simultaneously minimizing a volunteer's chance of being assigned to the inferior treatment.
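As background for how accruing responses can skew the allocation probabilities, here is a minimal sketch of one classical response-adaptive scheme, the randomized play-the-winner urn of Wei and Durham (1978). It illustrates the general idea only, and is not one of the new procedures proposed in the talk; the success probabilities below are made up.

    import random

    def randomized_play_the_winner(p_true, n_patients, seed=0):
        """Randomized play-the-winner urn (Wei and Durham, 1978).

        Start with one ball per treatment. Draw a ball to assign each
        patient; on a success add a ball for the same treatment, on a
        failure add a ball for the other. Allocation therefore drifts
        toward the arm performing better so far."""
        rng = random.Random(seed)
        urn = {"A": 1, "B": 1}
        assignments, successes = [], {"A": 0, "B": 0}
        for _ in range(n_patients):
            total = urn["A"] + urn["B"]
            arm = "A" if rng.random() < urn["A"] / total else "B"
            assignments.append(arm)
            if rng.random() < p_true[arm]:            # observe the response
                successes[arm] += 1
                urn[arm] += 1                         # reward this arm
            else:
                urn["B" if arm == "A" else "A"] += 1  # favor the other arm
        return assignments, successes

    # Illustrative (made-up) success rates; most patients drift to arm A.
    assign, succ = randomized_play_the_winner({"A": 0.7, "B": 0.4}, 200)
    print(assign.count("A"), "of 200 assigned to A;", succ)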


Friday, October 26, 2007. Knut Wittkowski,
Center for Clinical and Translational Science,
The Rockefeller University, New York, NY 10021. CB 102, 4:00PM.

Bioinformatics Tools Enabling U-Statistics: From Sports to Microarrays

U-statistics for univariate data (McNemar 1947, Mann-Whitney 1947) and censored data (Gehan 1965) are commonly used. We will extend u-statistics to multivariate data with innovative applications in sports, economics, sociology, biology, and medicine. First, we will present stratification as a means to improve McNemar-type tests for trio data in genetics (TDT) and develop more powerful variants for various genetic models. Then we will discuss the history of u-statistics, extend u-scores to multivariate data using a simple representation of partial orderings, and present a unified procedure for u-statistics in stratified designs (Friedman 1937) with replications (Kruskal and Wallis 1952). Using examples (often from sports) to demonstrate how information about relationships between variables can be incorporated through (a) transforming data, (b) converting data into partial orderings, and (c) combining partial orderings, we will discuss computational and statistical aspects of screening studies involving thousands of variables (SNP or gene-expression microarrays) and of non-parametric factor analyses. Finally, we will discuss the use of such methods for microarray quality control ('harshlighting'), data mining, and personalized medicine. The tools presented will consist of spreadsheets, functions from the packages muStat and 'harshlight' (available from http://cran.r-project.org and http://csan.insightful.com/), and Web services available from http://muStat.rockefeller.edu.

Prerequisites: Basic knowledge of statistics and programming. Recommended textbook: Lehmann EL (1975), Nonparametrics: Statistical Methods Based on Ranks, Holden-Day.
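To make the partial-ordering construction concrete, the sketch below computes u-scores under the componentwise partial order: each observation's score is the number of observations it dominates in every coordinate minus the number that dominate it. This is a minimal illustration of the general idea, not the muStat implementation.

    import numpy as np

    def u_scores(X):
        """U-scores from the componentwise partial ordering of the rows of X.

        Row i dominates row j if it is >= in every coordinate and > in at
        least one; rows that disagree across coordinates are incomparable
        and contribute nothing. The score (dominated minus dominating
        counts) is a multivariate analogue of Mann-Whitney rank scores."""
        X = np.asarray(X, dtype=float)
        n = X.shape[0]
        scores = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if np.all(X[i] >= X[j]) and np.any(X[i] > X[j]):
                    scores[i] += 1      # i dominates j
                elif np.all(X[j] >= X[i]) and np.any(X[j] > X[i]):
                    scores[i] -= 1      # j dominates i
        return scores

    # Four subjects, three variables each; higher is "better" in every variable.
    print(u_scores([[1, 2, 3], [2, 2, 3], [0, 5, 1], [2, 3, 4]]))  # [-2. 0. 0. 2.]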



Friday, November 2, 2007. Manfred Denker, Case Western Reserve University, and
Professor, Institut für Mathematische Stochastik, Georg-August-Universität Göttingen, Germany

A New Type of Bootstrapping Based on Almost Sure Limit Theorems

Bootstrap methods are used to estimate quantiles of an unknown
distribution. Almost sure limit theory offers another way to estimate
such quantiles; numerically it performs very similarly to classical
bootstrapping, sometimes even better. We explain this method and
introduce estimation and testing procedures based on the new approach.
We also explain the theoretical background, which is connected to
Brownian motion and the de Moivre-Laplace theorem.
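The device behind the approach is the almost sure central limit theorem: along a single sample path, logarithmically weighted occupation frequencies of the standardized partial sums converge almost surely to the standard normal law, so their weighted empirical quantiles can play the role that resampled quantiles play in the bootstrap. The following is a minimal sketch of that idea, assuming iid data with finite variance; it is not the speaker's procedure.

    import numpy as np

    def asclt_quantiles(x, probs):
        """Quantiles of the limiting N(0,1) law of the standardized mean,
        estimated from ONE sample path via the almost sure CLT:

            (1/log n) * sum_{k<=n} (1/k) * 1{S_k / sqrt(k) <= t} -> Phi(t) a.s.

        Data are centered and scaled empirically, so nothing about the
        underlying distribution needs to be known -- the role resampling
        plays in the classical bootstrap."""
        x = np.asarray(x, dtype=float)
        k = np.arange(1, len(x) + 1)
        s = (x - x.mean()).cumsum()              # centered partial sums
        t = s / (x.std(ddof=1) * np.sqrt(k))     # standardized partial sums
        w = 1.0 / k                              # logarithmic weights
        order = np.argsort(t)
        cdf = np.cumsum(w[order]) / w.sum()      # weighted empirical CDF
        return np.interp(probs, cdf, t[order])

    rng = np.random.default_rng(1)
    sample = rng.exponential(size=100_000)       # skewed, non-normal data
    # Approaches the N(0,1) quantiles -1.96 and 1.96, though only at a
    # logarithmic rate -- expect rough agreement at moderate n.
    print(asclt_quantiles(sample, [0.025, 0.975]))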

Spring 2008

Wednesday, February 20, 2008, Time 4:00 PM, Room TBA
Reinhard Laubenbacher, Virginia Bioinformatics Institute at Virginia Tech

Design of Experiments and Biochemical Network Inference
Design of experiments is a branch of statistics that aims to identify efficient procedures for planning experiments in order to optimize knowledge discovery. Network inference is a subfield of systems biology devoted to the identification of biochemical networks from experimental data. Common to both areas of research is their focus on maximizing the information gathered from experimentation. The goal of this talk is to establish a connection between these two areas arising from their common use of polynomial models and techniques from computational algebra.
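The polynomial-model connection can be seen in miniature over the finite field GF(2), where every update rule of a Boolean network is a polynomial and network inference becomes interpolation from observed state transitions. The toy sketch below (an illustration of the general idea, not an algorithm from the talk) enumerates all two-variable polynomials over GF(2) and keeps those consistent with the data; when several remain, the model is not identified, which is precisely where the design of further experiments enters.

    from itertools import product

    # Over GF(2), x**2 = x, so every function of two Boolean variables is a
    # polynomial over the square-free monomial basis {1, x, y, xy}.
    MONOMIALS = [lambda x, y: 1, lambda x, y: x, lambda x, y: y, lambda x, y: x * y]

    def consistent_models(data):
        """All GF(2) polynomials agreeing with observed (x, y) -> z transitions;
        each model is a coefficient vector over the monomial basis, mod 2."""
        return [coeffs for coeffs in product((0, 1), repeat=len(MONOMIALS))
                if all(sum(c * m(x, y) for c, m in zip(coeffs, MONOMIALS)) % 2 == z
                       for (x, y), z in data)]

    # Two experiments leave four models consistent with the data ...
    print(consistent_models([((0, 0), 0), ((1, 1), 1)]))
    # ... a third, well-chosen experiment cuts the list in half.
    print(consistent_models([((0, 0), 0), ((1, 1), 1), ((1, 0), 1)]))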


Friday, February 29, 2008. Prof. Yichuan Zhao, Georgia State University. CB 102, 4:00PM.

Omnibus Tests for Comparison of Competing Risks with Covariate Effects via the Additive Risk Model

Researchers often study competing risks settings in which subjects may fail from any one of K causes, and comparing two competing risks while accounting for covariate effects is very important in medical studies. We develop omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. The omnibus tests are derived under the additive risk model from a weighted difference of estimates of the cumulative cause-specific hazard rates. Simultaneous confidence bands for the difference of two conditional cumulative incidence functions are also constructed. A simulation procedure is used to sample from the null distribution of the test process, and graphical and numerical techniques are used to detect significant differences in the risks. A simulation study shows that the proposed procedure has good finite-sample performance, and a melanoma data set from a clinical trial is used for illustration.
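The building block of such comparisons, before any covariate adjustment under the additive risk model, is the nonparametric (Nelson-Aalen) estimate of each cause-specific cumulative hazard; the test statistics then weight the difference of two such estimates. Here is a minimal sketch of that building block on made-up data; it is not the paper's test.

    import numpy as np

    def cause_specific_cum_hazard(times, causes, cause):
        """Nelson-Aalen estimate of one cause-specific cumulative hazard.

        times  : observed event or censoring times
        causes : 0 = censored, 1..K = cause of failure
        Returns the event times for `cause` and the estimate at those times."""
        times, causes = np.asarray(times, float), np.asarray(causes)
        event_times = np.unique(times[causes == cause])
        H, cum = [], 0.0
        for t in event_times:
            at_risk = np.sum(times >= t)                  # size of the risk set
            d = np.sum((times == t) & (causes == cause))  # cause-specific failures
            cum += d / at_risk
            H.append(cum)
        return event_times, np.array(H)

    # Made-up data: 0 = censored, causes 1 and 2 compete.
    t = [2, 3, 3, 5, 7, 8, 9, 12, 14, 15]
    c = [1, 2, 1, 0, 1, 2, 0, 1, 2, 0]
    t1, H1 = cause_specific_cum_hazard(t, c, cause=1)
    t2, H2 = cause_specific_cum_hazard(t, c, cause=2)
    print("cause 1:", dict(zip(t1, np.round(H1, 3))))
    print("cause 2:", dict(zip(t2, np.round(H2, 3))))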

TBA. Steen Andersen, Dept. of Statistics, Indiana University.

Friday, March 21, 2008. Prof. Zheng Qi, Texas A&M University. CB 102, 4:00PM.

An Overview of the Luria-Delbrück Distribution

The Luria-Delbrück distribution originated in the 1940s in the study of bacterial mutation. Several statistical giants made fundamental contributions to the field at the beginning; among them are J.B.S. Haldane, M.S. Bartlett, D.G. Kendall, P. Armitage, and possibly R.A. Fisher, according to an anecdote told by the prominent geneticist J.F. Crow. The latest version of the encyclopedic work on discrete distributions (Univariate Discrete Distributions by Johnson, Kemp and Kotz, 3rd edition, 2005) devoted no more than three pages (pp. 505-507) to the Luria-Delbrück distribution, as many practical issues (e.g. likelihood estimation) were still unsolved when the third edition of the book was being compiled. In fact, the tome of Johnson et al. touches only a special case of a rather rich family of discrete distributions inspired by the Nobel Prize-winning work of Luria and Delbrück. In the past six years or so, these distributions have been further scrutinized and most of the long-standing practical issues have been resolved. This talk presents an overview of the developments of the last six decades, focusing on some new algorithms devised in the past six years that have made the Luria-Delbrück distribution a tremendously useful tool for estimating microbial mutation rates.
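For orientation, the best-known classical algorithm fits in a few lines: the Ma-Sandri-Sarkar (1992) recursion for the Lea-Coulson form of the distribution, which makes likelihood computation, and hence maximum likelihood estimation of the expected number of mutations m, routine. The sketch below implements that textbook recursion with a crude grid-search MLE; the newer algorithms discussed in the talk are not reproduced here, and the mutant counts are invented.

    import math

    def luria_delbruck_pmf(m, n_max):
        """Lea-Coulson Luria-Delbruck probabilities p_0..p_{n_max} by the
        Ma-Sandri-Sarkar (1992) recursion:

            p_0 = exp(-m),   p_n = (m / n) * sum_{i=0}^{n-1} p_i / (n - i + 1),

        where m is the expected number of mutations per culture."""
        p = [math.exp(-m)]
        for n in range(1, n_max + 1):
            p.append((m / n) * sum(p[i] / (n - i + 1) for i in range(n)))
        return p

    def mle_m(counts):
        """Crude grid-search maximum likelihood estimate of m."""
        n_max = max(counts)
        def loglik(m):
            p = luria_delbruck_pmf(m, n_max)
            return sum(math.log(p[x]) for x in counts)
        grid = [0.05 * k for k in range(1, 400)]    # m in (0, 20)
        return max(grid, key=loglik)

    # Hypothetical mutant counts from parallel cultures (fluctuation assay);
    # the heavy tail (one jackpot culture) is characteristic.
    counts = [0, 0, 1, 0, 3, 27, 0, 1, 0, 2, 0, 0, 5, 1, 0]
    print("MLE of m:", mle_m(counts))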

Friday, April 25, 2008. Prof. Bruce Turnbull, Cornell University (R.L. Anderson Lecture)
Reception 3:00PM, Lecture 4:00PM, 18th floor POT

Adaptive and Non-Adaptive Group Sequential Designs for Clinical Trials

Methods have been proposed to re-design a clinical trial at an interim stage in order to increase power.
This may be in response to external factors which indicate power should be sought at a smaller effect size,
or it could be a reaction to data observed in the study itself.  In order to preserve the type I error rate,
methods for unplanned design change have to be defined in terms of non-sufficient statistics and this calls
into question their efficiency and the credibility of conclusions reached.  We evaluate schemes for adaptive
re-design, assessing the possible benefits of pre-planned adaptive designs by numerical computation of
optimal tests; these optimal adaptive designs are concrete examples of optimal sequentially planned
sequential tests proposed by Schmitz (1993).  We conclude that the flexibility of unplanned adaptive designs
comes at a price and we recommend the appropriate power for a study should be determined as
thoroughly as possible at the outset.  Then, standard error spending tests, possibly with unevenly spaced
analyses, provide efficient designs. However, it is still possible to fall back on flexible methods for re-design
should study objectives change unexpectedly once the trial is under way.
(This is joint work with Chris Jennison.)
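As a concrete picture of the "standard error spending tests" recommended above, the following sketch computes one-sided group sequential boundaries by Monte Carlo for the simple spending function f(t) = alpha*t^2, which, like O'Brien-Fleming spending, releases little error early. It uses the canonical joint distribution of the interim z-statistics; the spending function and the three equally spaced looks are illustrative assumptions, and production software would use recursive numerical integration rather than simulation.

    import numpy as np

    def error_spending_boundaries(info_frac, alpha=0.025, n_sim=400_000, seed=0):
        """One-sided group sequential boundaries by Monte Carlo for the
        spending function f(t) = alpha * t**2 (spends little error early,
        qualitatively like O'Brien-Fleming spending).

        Under H0 the interim z-statistics have the canonical joint normal
        distribution: independent score increments give z_k with
        corr(z_i, z_j) = sqrt(t_i / t_j) for i <= j. Each boundary is the
        empirical quantile making the cumulative crossing probability
        equal the error spent so far."""
        rng = np.random.default_rng(seed)
        t = np.asarray(info_frac, dtype=float)
        incr = rng.standard_normal((n_sim, len(t))) * np.sqrt(np.diff(t, prepend=0.0))
        z = np.cumsum(incr, axis=1) / np.sqrt(t)
        spend = alpha * t ** 2                 # cumulative type I error spent
        alive = np.ones(n_sim, dtype=bool)     # paths that have not yet stopped
        bounds = []
        for k in range(len(t)):
            newly = spend[k] - (spend[k - 1] if k else 0.0)
            zk = np.sort(z[alive, k])[::-1]    # alive paths, largest first
            b = zk[min(int(newly * n_sim), len(zk) - 1)]
            bounds.append(round(float(b), 2))
            alive &= z[:, k] < b               # paths crossing the boundary stop
        return bounds

    # Three equally spaced looks; note the much stricter early boundaries.
    print(error_spending_boundaries([1/3, 2/3, 1.0]))  # roughly [2.8, 2.4, 2.0]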



Page last updated 15 April 2008.