
Vertex nomination


Vertex nomination is a subclass of recommender systems operating on attributed graphs. Attributed graphs are an attractive way to represent data from a diverse set of natural and man-made phenomena. Frequently, these data have latent attributes or class memberships that are of interest, and uncovering them is of intrinsic value. So-called recommender systems address the ‘more like this’ problem: given a subset of the data labeled as ‘interesting’, find unlabeled examples that are ‘similarly interesting’, often with the assumption that they are from the same latent class or possess similar latent attributes. Unsurprisingly, recommender systems operating on attributed graphs have generated an interesting body of research, detailing both theoretical and practical advances for a range of applications. This advanced review examines the relevant literature, with a particular focus on the importance and inclusion of edge and vertex attributes used in conjunction with the graph structure. We include example applications from human language technology, biology, and neuroscience for concreteness, although the algorithms discussed are widely applicable. WIREs Comput Stat 2014, 6:144–153. doi: 10.1002/wics.1294
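As a purely illustrative sketch of this ‘more like this’ task on a graph, the snippet below seeds a personalized PageRank at the vertices already labeled as interesting and nominates the highest-scoring unlabeled vertices. It uses graph structure (context) only, ignores attributes, and relies on networkx; the toy graph, the seed set, and the choice of personalized PageRank as the scoring rule are assumptions for illustration, not the schemes reviewed in the article.

```python
# A minimal, hypothetical illustration of vertex nomination from context
# (graph structure) only: seed a personalized PageRank at the labeled
# "interesting" vertices and rank the remaining vertices by their score.
import networkx as nx

# Toy graph (structure only; vertex and edge attributes omitted for brevity).
# Vertices 0-2 form one tight community, 3-5 another, joined by one bridge.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # community containing the seeds
                  (3, 4), (4, 5), (3, 5),   # background community
                  (2, 3)])                  # bridge edge between the two

interesting = {0, 1}                        # vertices already labeled "interesting"

# Concentrate the restart distribution on the seed vertices.
personalization = {v: (1.0 if v in interesting else 0.0) for v in G}
scores = nx.pagerank(G, alpha=0.85, personalization=personalization)

# The nomination list: unlabeled vertices, best candidates first.
nominations = sorted((v for v in G if v not in interesting),
                     key=lambda v: scores[v], reverse=True)
print(nominations)  # vertex 2, which closes the seed triangle, ranks first
```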
Illustration of our attributed graph model, with a legend visually indicating our notation.
Each point in this simplex indicates a combination of weights used to combine four human language technologies (HLTs) and context when calculating the vertex score. The axes (clockwise from the bottom) indicate the weight assigned to context (γ0), the combined weight of three HLTs (γ1 + γ2 + γ3), and the weight for a ‘compression’ HLT (γ4). The 30 highest‐scoring γs for each objective measure are plotted: green points for mean average precision (MAP), blue for minimum reciprocal rank (MRR), and red for P@5 and for P@10.
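The weight search behind this figure can be pictured as enumerating points of the probability simplex and retaining the highest-scoring ones under a chosen objective. The sketch below is a hypothetical illustration: the grid resolution and the placeholder evaluate_gamma objective are assumptions, not the article's procedure.

```python
# Hypothetical sketch of a search over the weight simplex: enumerate vectors
# gamma = (gamma_0, ..., gamma_4) that are nonnegative and sum to 1, score
# each under some objective (MAP, MRR, P@5, P@10, ...), and keep the 30 best.
import itertools
import random

STEP = 0.1   # grid resolution on the simplex (an assumption for illustration)
K = 5        # gamma_0 for context plus four HLT weights

def simplex_grid(k, step):
    """Yield all k-tuples of multiples of `step` in [0, 1] summing to 1."""
    n = round(1.0 / step)
    for cuts in itertools.combinations_with_replacement(range(n + 1), k - 1):
        parts = [cuts[0]] + [b - a for a, b in zip(cuts, cuts[1:])] + [n - cuts[-1]]
        yield tuple(p * step for p in parts)

def evaluate_gamma(gamma):
    """Placeholder objective; in practice this would be the MAP, MRR, or P@k
    of the nomination list produced with weights `gamma`."""
    random.seed(hash(gamma))      # deterministic toy score for the example
    return random.random()

ranked = sorted(simplex_grid(K, STEP), key=evaluate_gamma, reverse=True)
top_30 = ranked[:30]              # the 30 highest-scoring weight vectors
print(top_30[0])
```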
Vertex nomination performance on Enron data. The y‐axis denotes minimum reciprocal rank (MRR); higher is better. The x‐axis denotes one parameter of an importance sampling procedure, roughly interpretable as the amount of data available for vertex nomination; higher values indicate more edges, and thus more information available from both content and context. The black line marked with X's denotes performance using context information alone (γ0 = 1). Each colored line marked with N's denotes the performance of a single human language technology (HLT) (a single γi = 1, i > 0). Each colored line marked with +'s corresponds to the fusion (γ0 = 0.5, γi = 0.5) of equal parts context and content, effectively mixing the information from the black X line and the corresponding colored N line.
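The fusion described here can be read as a convex combination of per-vertex scores from context and from each HLT, evaluated by reciprocal rank. The sketch below is a hypothetical illustration with toy score vectors; the helper names and numbers are assumptions, and the MRR curves in the figure aggregate this per-task reciprocal rank over nomination tasks.

```python
# Hypothetical sketch of the fusion in this figure: a vertex's overall score
# is a weighted sum of a context (graph) score and several content (HLT)
# scores, with nonnegative weights gamma summing to 1.
import numpy as np

def fuse_scores(context_scores, hlt_scores, gammas):
    """gammas = (gamma_0, gamma_1, ..., gamma_k): gamma_0 weights context,
    gamma_i (i > 0) weights the i-th HLT score vector."""
    gammas = np.asarray(gammas, dtype=float)
    assert np.all(gammas >= 0) and np.isclose(gammas.sum(), 1.0)
    sources = np.vstack([context_scores] + list(hlt_scores))
    return gammas @ sources             # per-vertex weighted sum over sources

def reciprocal_rank(scores, interesting):
    """1 / rank of the best-ranked truly interesting vertex (higher is better)."""
    order = np.argsort(-scores)         # vertices sorted by descending score
    for rank, v in enumerate(order, start=1):
        if v in interesting:
            return 1.0 / rank
    return 0.0

# Toy example: five vertices, one context source, two HLT content sources.
context = np.array([0.9, 0.2, 0.4, 0.1, 0.3])
hlts = [np.array([0.1, 0.9, 0.3, 0.2, 0.1]),
        np.array([0.2, 0.1, 0.7, 0.3, 0.2])]

# Equal-parts fusion of context and the first HLT (gamma_0 = 0.5, gamma_1 = 0.5),
# mirroring the "+" curves; the second HLT receives zero weight here.
fused = fuse_scores(context, hlts, gammas=(0.5, 0.5, 0.0))
print(reciprocal_rank(fused, interesting={1}))   # vertex 1 ranks first -> 1.0
```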

Browse by Topic

Algorithms and Computational Methods > Quadratic and Nonlinear Programming
