
Feature-based classification of networks

Published online by Cambridge University Press: 23 September 2019

Ian Barnett*
Affiliation:
Department of Biostatistics, University of Pennsylvania, Philadelphia, PA 19104, USA
Nishant Malik*
Affiliation:
Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA (Email: [email protected])
Marieke L. Kuijjer
Affiliation:
Biostatistics and Computational Biology, Dana Farber Cancer Institute, Boston, MA 02115, USA (Email: [email protected])
Peter J. Mucha
Affiliation:
Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599, USA (Email: [email protected])
Jukka-Pekka Onnela
Affiliation:
Department of Biostatistics, Harvard University, Boston, MA 02115, USA (Email: [email protected])
*Corresponding authors. Emails: [email protected]; [email protected]

Abstract

Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural features. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. Within each such class, networks describing similar systems tend to have similar features, presumably because networks representing similar systems are generated by a shared set of domain-specific mechanisms; it should therefore be possible to classify networks based on their features at various structural levels. Here we describe and demonstrate a new hybrid approach that combines manual selection of network features of potential interest with existing automated classification methods. In particular, by selecting well-known network features that have been studied extensively in the social network analysis and network science literature, and then classifying networks on the basis of these features with methods such as random forests, which are known to handle the kind of feature collinearity that arises in this setting, we find that our approach achieves both higher accuracy and greater interpretability in shorter computation time than other methods.
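The abstract summarizes the workflow at a high level: compute a set of hand-picked structural features for each network, then feed those feature vectors to an off-the-shelf classifier such as a random forest. The following is a minimal illustrative sketch of such a pipeline, assuming Python with networkx and scikit-learn; the particular features, synthetic graphs, and parameter choices are placeholders and are not the feature set, data, or settings used in the paper.

# Minimal sketch of feature-based network classification (illustrative only;
# not the paper's actual feature set, corpus, or settings).
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def network_features(G):
    """Compute a small vector of hand-picked structural features for a graph."""
    degrees = [d for _, d in G.degree()]
    return [
        G.number_of_nodes(),
        nx.density(G),
        np.mean(degrees),
        np.var(degrees),
        nx.transitivity(G),                          # global clustering coefficient
        nx.degree_assortativity_coefficient(G),      # degree-degree correlation
    ]

# Toy labelled corpus: class 0 = Erdos-Renyi graphs, class 1 = Barabasi-Albert graphs.
graphs, labels = [], []
rng = np.random.default_rng(0)
for _ in range(50):
    n = int(rng.integers(50, 100))
    graphs.append(nx.erdos_renyi_graph(n, 0.1)); labels.append(0)
    graphs.append(nx.barabasi_albert_graph(n, 3)); labels.append(1)

X = np.array([network_features(G) for G in graphs])
y = np.array(labels)

# Random forests cope reasonably well with the collinearity among structural features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature importances provide the kind of interpretability highlighted in the abstract.
clf.fit(X, y)
print(dict(zip(["n", "density", "mean_deg", "var_deg", "transitivity", "assortativity"],
               clf.feature_importances_.round(3))))

In this sketch the synthetic graph families are easy to separate; the point is only to show the two-step structure (feature extraction, then supervised classification) rather than to reproduce any result from the article.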

Type
Original Article
Copyright
© Cambridge University Press 2019 


Footnotes

These authors contributed equally

Supplementary material

Barnett et al. supplementary material (PDF, 2.1 MB)