Gaussian graphical models are useful tools for inferring the conditional independence structure of multivariate random variables. Unfortunately, Bayesian inference of latent graph structures is challenging due to the exponential growth of $\mathcal{G}_n$, the set of all graphs on $n$ vertices. One approach that has been proposed to tackle this problem is to limit the search to subsets of $\mathcal{G}_n$. In this paper we study subsets that are vector subspaces, with the cycle space $\mathcal{C}_n$ as the main example. We propose a novel prior on $\mathcal{C}_n$ based on linear combinations of cycle basis elements and present its theoretical properties. Using this prior, we implement a Markov chain Monte Carlo algorithm and show that (i) posterior edge inclusion estimates computed with our technique are comparable to estimates from the standard technique despite searching a smaller graph space, and (ii) the vector space perspective enables straightforward implementation of MCMC algorithms.
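The abstract describes graphs in the cycle space as GF(2) linear combinations of cycle basis elements. The following is a minimal illustrative sketch of that idea only, not the authors' prior or implementation: it assumes the networkx library, takes a cycle basis of a complete graph, and forms a random graph as the symmetric difference (GF(2) sum) of the edge sets of the selected basis cycles; the inclusion probability p is an arbitrary choice for illustration.

```python
# Sketch only: a member of the cycle space C_n is a GF(2) linear combination
# of cycle-basis elements, i.e. the symmetric difference of the edge sets of
# the chosen basis cycles.
import random
import networkx as nx

n = 5
K_n = nx.complete_graph(n)
basis = nx.cycle_basis(K_n)          # one cycle basis of K_n (dimension m - n + 1)

def edges_of_cycle(cycle):
    """Edge set of a cycle given as a node list, as unordered pairs."""
    return {frozenset(e) for e in zip(cycle, cycle[1:] + cycle[:1])}

def sample_cycle_space_graph(basis, p=0.5):
    """Draw binary coefficients and XOR the corresponding basis cycles."""
    edges = set()
    for cycle in basis:
        if random.random() < p:                # coefficient 1 with probability p
            edges ^= edges_of_cycle(cycle)     # symmetric difference = GF(2) sum
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from(tuple(e) for e in edges)
    return G

G = sample_cycle_space_graph(basis)
print(sorted(tuple(sorted(e)) for e in G.edges()))
```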
Chapter 1: In this chapter, we provide formal definitions of real and complex vector spaces, and many examples. Among the important concepts introduced are linear combinations, span, linear independence, and linear dependence.
In this appendix we review some concepts regarding finite-dimensional vector spaces, their bases, norms, and inner products that are essential for the reader and appear repeatedly throughout the text.
Viewing an algebraic number field as a vector space relative to a subfield, which was foreshadowed in Chapter 4, involves varying the field of "scalars" in the definition of vector space. This leads in turn to relative concepts of "basis" and "dimension" which must be taken into account in algebraic number theory. In this chapter we review linear algebra from the ground up, with an emphasis on the relative point of view. This brings some nonstandard results into the picture, such as the Dedekind product theorem and the representation of algebraic numbers by matrices.
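As a standard worked illustration of the relative point of view (not an excerpt from the chapter), consider $\mathbb{Q}(\sqrt{2})$ as a vector space over the subfield $\mathbb{Q}$; multiplication by $\sqrt{2}$ then becomes a matrix relative to the basis $\{1, \sqrt{2}\}$:
\[
  \mathbb{Q}(\sqrt{2}) = \{\, a + b\sqrt{2} : a, b \in \mathbb{Q} \,\},
  \qquad \text{basis } \{1, \sqrt{2}\}, \qquad \dim_{\mathbb{Q}} \mathbb{Q}(\sqrt{2}) = 2,
\]
\[
  \sqrt{2}\,(a + b\sqrt{2}) = 2b + a\sqrt{2}
  \;\;\Longleftrightarrow\;\;
  \begin{pmatrix} 0 & 2 \\ 1 & 0 \end{pmatrix}
  \begin{pmatrix} a \\ b \end{pmatrix}
  =
  \begin{pmatrix} 2b \\ a \end{pmatrix},
\]
so the algebraic number $\sqrt{2}$ is represented by a $2 \times 2$ matrix over the field of "scalars" $\mathbb{Q}$.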
In set theory without the Axiom of Choice ($\mathsf{AC}$), we investigate the open problem of the deductive strength of statements which concern the existence of almost disjoint and maximal almost disjoint (MAD) families of infinite-dimensional subspaces of a given infinite-dimensional vector space, as well as the extension of almost disjoint families in infinite-dimensional vector spaces to MAD families.
In this paper, we study the relation between the size of the class two quotients of a linear group and the size of the vector space. We answer a question raised in Keller and Yang [Class 2 quotients of solvable linear groups, J. Algebra 509 (2018), 386–396].
We briefly review some standard material about the category of vector spaces. The discussion includes the kernel, cokernel, image, coimage of a linear map, the duality functor, the internal hom for the tensor product, and idempotent operators.
This paper serves as a short overview of the JNLE special issue on representation of the meaning of the sentence, bringing together traditional symbolic and modern continuous approaches. We indicate notable aspects of sentence meaning and their compatibility with the two streams of research and then summarize the papers selected for this special issue.
Using Mathematica and the Wolfram Language to engage with the concepts of linear algebra, including solving systems of linear equations, vector spaces, Gaussian elimination, eigenvalues, and eigenvectors.
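The item above refers to Mathematica and the Wolfram Language; purely as an analogous sketch in a different language (an assumption for illustration, not the item's actual material), the same tasks can be shown in Python with NumPy:

```python
# Analogous illustration in Python/NumPy, not Wolfram Language content:
# solving a linear system and computing eigenvalues/eigenvectors.
import numpy as np

# Solve A x = b (LAPACK uses Gaussian elimination with partial pivoting).
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
x = np.linalg.solve(A, b)
print(x)                      # [ 2.  3. -1.]

# Eigenvalues and eigenvectors of a symmetric matrix.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eigh(S)
print(eigenvalues)            # [1. 3.]
```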
We propose a fundamentally new approach to Datalog evaluation. Given a linear Datalog program DB written using N constants and binary predicates, we first translate the if-and-only-if completions of clauses in DB into a set Eq(DB) of matrix equations with a non-linear operation, where relations in MDB, the least Herbrand model of DB, are encoded as adjacency matrices. We then translate Eq(DB) into another, purely linear set of matrix equations Ẽq(DB). It is proved that the least solution of Ẽq(DB) in the sense of matrix ordering can be converted to the least solution of Eq(DB), and the latter gives MDB as a set of adjacency matrices. Hence, computing the least solution of Ẽq(DB) is equivalent to computing MDB specified by DB. For a class of tail-recursive programs and for some other types of programs, our approach achieves O(N^3) time complexity irrespective of the number of variables in a clause, since only matrix operations costing O(N^3) or less are used. We conducted two experiments that compute the least Herbrand models of linear Datalog programs. The first experiment computes the transitive closure of artificial data and of real network data taken from the Koblenz Network Collection. The second compares the proposed approach with state-of-the-art symbolic systems, including two Prolog systems and two ASP systems, in terms of computation time for a transitive closure program and the same-generation program. In these experiments, we observed that our linear algebraic approach runs 10^1 to 10^4 times faster than the symbolic systems when the data is not sparse. Our approach is inspired by the emergence of big knowledge graphs and is expected to contribute to the realization of rich and scalable logical inference for knowledge graphs.
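The abstract gives no code; the following is a minimal sketch of the underlying idea only, not the paper's Eq(DB) or Ẽq(DB) encoding: a binary relation over N constants is stored as a boolean adjacency matrix, and its transitive closure (the least Herbrand model of the usual path program) is computed as a least fixed point using matrix products, each costing O(N^3) or less.

```python
# Sketch: relations as boolean adjacency matrices, transitive closure as a
# least fixed point computed with matrix products (boolean semiring).
import numpy as np

def transitive_closure(adj):
    """Least fixed point of R = E ∪ (R·E), i.e. the transitive closure of E."""
    closure = adj.astype(bool)
    while True:
        step = closure | ((closure.astype(int) @ adj.astype(int)) > 0)
        if np.array_equal(step, closure):
            return closure
        closure = step

# Facts edge(1,2), edge(2,3), edge(3,4) over N = 5 constants.
N = 5
E = np.zeros((N, N), dtype=bool)
for s, t in [(1, 2), (2, 3), (3, 4)]:
    E[s, t] = True

path = transitive_closure(E)
print(np.argwhere(path))   # all derivable path(s, t) pairs
```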