References

Abu-Mostafa, Y. (1995). Hints, Neural Computation 7: 639–671.

Ackley, D. H., Hinton, G. and Sejnowski, T. (1985). A learning algorithm for Boltzmann machines, Cognitive Science 9: 147–169.

Adam, B.-L., Qu, Y., Davis, J. W., Ward, M. D., Clements, M. A., Cazares, L. H., Semmes, O. J., Schellhammer, P. F., Yasui, Y., Feng, Z. and Wright, G. (2003). Serum protein fingerprinting coupled with a pattern-matching algorithm distinguishes prostate cancer from benign prostate hyperplasia and healthy men, Cancer Research 63(10): 3609–3614.

Agrawal, R., Mannila, H., Srikant, R., Toivonen, H. and Verkamo, A. I. (1995). Fast discovery of association rules, Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, Cambridge, MA.

Agresti, A. (1996). An Introduction to Categorical Data Analysis, Wiley, New York.

Agresti, A. (2002). Categorical Data Analysis (2nd ed.), Wiley, New York.

Ahn, J. and Marron, J. (2005). The direction of maximal data piling in high dimensional space, Technical report, Statistics Department, University of North Carolina, Chapel Hill.

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle, Second International Symposium on Information Theory, pp. 267–281.
Allen, D. (1974). The relationship between variable selection and data augmentation and a method of prediction, Technometrics 16: 125–127.

Ambroise, C. and McLachlan, G. (2002). Selection bias in gene extraction on the basis of microarray gene-expression data, Proceedings of the National Academy of Sciences 99: 6562–6566.

Amit, Y. and Geman, D. (1997). Shape quantization and recognition with randomized trees, Neural Computation 9: 1545–1588.

Anderson, J. and Rosenfeld, E. (eds) (1988). Neurocomputing: Foundations of Research, MIT Press, Cambridge, MA.

Anderson, T. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.), Wiley, New York.

Bach, F. and Jordan, M. (2002). Kernel independent component analysis, Journal of Machine Learning Research 3: 1–48.

Bair, E. and Tibshirani, R. (2004). Semi-supervised methods to predict patient survival from gene expression data, PLoS Biology 2: 511–522.

Bair, E., Hastie, T., Paul, D. and Tibshirani, R. (2006). Prediction by supervised principal components, Journal of the American Statistical Association 101: 119–137.

Bakin, S. (1999). Adaptive regression and model selection in data mining problems, PhD thesis, Australian National University, Canberra.

Banerjee, O., Ghaoui, L. E. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data, Journal of Machine Learning Research 9: 485–516.

Barron, A. (1993). Universal approximation bounds for superpositions of a sigmoidal function, IEEE Transactions on Information Theory 39: 930–945.

Bartlett, P. and Traskin, M. (2007). AdaBoost is consistent, in B. Schölkopf, J. Platt and T. Hoffman (eds), Advances in Neural Information Processing Systems 19, MIT Press, Cambridge, MA, pp. 105–112.

Becker, R., Cleveland, W. and Shyu, M. (1996). The visual design and control of trellis display, Journal of Computational and Graphical Statistics 5: 123–155.