Portal:Linear algebra

Introduction
Linear algebra is the branch of mathematics concerning linear equations such as
 $a_{1}x_{1}+\cdots +a_{n}x_{n}=b,$
linear functions such as
 $(x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\ldots +a_{n}x_{n},$
and their representations through matrices and vector spaces.
Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis may be basically viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and engineering areas, because it allows modeling many natural phenomena, and efficiently computing with such models. For nonlinear systems, which cannot be modeled with linear algebra, linear algebra is often used as a first-order approximation.
Selected general articles
 In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context. One context is numerical linear algebra; another is algorithms for solving ordinary and partial differential equations by discrete approximation.
In numerical linear algebra the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly colliding eigenvalues. On the other hand, in numerical algorithms for differential equations the concern is the growth of roundoff errors and/or initially small fluctuations in initial data which might cause a large deviation of the final answer from the exact solution. Read more... In linear algebra, linear transformations can be represented by matrices. If $T$ is a linear transformation mapping $\mathbb {R} ^{n}$ to $\mathbb {R} ^{m}$ and ${\vec {x}}$ is a column vector with $n$ entries, then
:$T({\vec {x}})=\mathbf {A} {\vec {x}}$ Read more...  The following tables provide a comparison of linear algebra software libraries, either specialized or general purpose libraries with significant linear algebra coverage. Read more...
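As a minimal sketch of the correspondence above (plain Python, no particular library assumed; `matvec` is an illustrative helper, not a standard function), applying a matrix to a column vector computes $T({\vec {x}})=\mathbf {A} {\vec {x}}$:

```python
def matvec(A, x):
    """Apply the matrix A (given as a list of rows) to the vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# A represents the linear map (x, y) -> (x + 2y, 3y)
A = [[1, 2],
     [0, 3]]
print(matvec(A, [1, 1]))  # [3, 3]
```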
In linear algebra, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis. The first usage of the concept of a vector space with an inner product is due to Peano, in 1898.
An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space. A complete space with an inner product is called a Hilbert space. An (incomplete) space with an inner product is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product is a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. Read more... Suppose V and W are vector spaces over the field K. The cartesian product V × W can be given the structure of a vector space over K (Halmos 1974, §18) by defining the operations componentwise:
 (v_{1}, w_{1}) + (v_{2}, w_{2}) = (v_{1} + v_{2}, w_{1} + w_{2})
 α (v, w) = (α v, α w)
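These componentwise operations can be sketched directly (an illustrative fragment, taking V = W = ℝ so that pairs of numbers stand in for pairs of vectors; the helper names are not from any library):

```python
def pair_add(p, q):
    """Componentwise addition on V x W."""
    (v1, w1), (v2, w2) = p, q
    return (v1 + v2, w1 + w2)

def pair_scale(alpha, p):
    """Componentwise scalar multiplication on V x W."""
    v, w = p
    return (alpha * v, alpha * w)

print(pair_add((1, 2.0), (3, 4.0)))  # (4, 6.0)
print(pair_scale(2, (1, 2.0)))       # (2, 4.0)
```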
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
In linear algebra, real numbers or other elements of a field are called scalars and relate to vectors in a vector space through the operation of scalar multiplication, in which a vector can be multiplied by a number to produce another vector. More generally, a vector space may be defined by using any field instead of real numbers, such as complex numbers. Then the scalars of that vector space will be the elements of the associated field. Read more... In mathematics, the tensor product V ⊗ W of two vector spaces V and W (over the same field) is itself a vector space, together with an operation of bilinear composition, denoted by ⊗, from ordered pairs in the Cartesian product V × W into V ⊗ W, in a way that generalizes the outer product. The tensor product of V and W is the vector space generated by the symbols v ⊗ w, with v ∈ V and w ∈ W, in which the relations of bilinearity are imposed for the product operation ⊗, and no other relations are assumed to hold. The tensor product space is thus the "freest" (or most general) such vector space, in the sense of having the fewest constraints.
The tensor product of (finite dimensional) vector spaces has dimension equal to the product of the dimensions of the two factors:
:$\dim(V\otimes W)=\dim V\times \dim W.$
In particular, this distinguishes the tensor product from the direct sum vector space, whose dimension is the sum of the dimensions of the two summands:
:$\dim(V\oplus W)=\dim V+\dim W.$ Read more...
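The dimension count dim(V ⊗ W) = dim V × dim W can be illustrated in coordinates: the tensor product of coordinate vectors has one entry per pair of basis elements, i.e. the outer product flattened. This is only a sketch in plain Python (the helper `outer` is illustrative, not a library function):

```python
def outer(v, w):
    """Coordinates of v (x) w in the basis {e_i (x) f_j}."""
    return [vi * wj for vi in v for wj in w]

v = [1, 2, 3]   # dim V = 3
w = [4, 5]      # dim W = 2
t = outer(v, w)
print(len(t))   # 6 == 3 * 2, matching dim(V (x) W) = dim V * dim W
print(t)        # [4, 5, 8, 10, 12, 15]
```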
In numerical analysis and computer science, a sparse matrix or sparse array is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is called the sparsity of the matrix (which is equal to 1 minus the density of the matrix).
Conceptually, sparsity corresponds to systems that are loosely coupled. Consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory, which have a low density of significant data or connections. Read more... In mathematics, and more specifically in linear algebra and functional analysis, the kernel (also known as null space or nullspace) of a linear map L : V → W between two vector spaces V and W, is the set of all elements v of V for which L(v) = 0, where 0 denotes the zero vector in W. That is, in set-builder notation,
:$\ker(L)=\left\{\mathbf {v} \in V\mid L(\mathbf {v} )=\mathbf {0} \right\}{\text{.}}$ Read more...  In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems. Read more...
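The kernel condition L(v) = 0 defined above can be checked pointwise. A minimal sketch in plain Python (the map `lin_map` is a made-up example, not from the text):

```python
def lin_map(v):
    """An example linear map R^3 -> R^2: (x, y, z) |-> (x + y, y + z)."""
    x, y, z = v
    return (x + y, y + z)

print(lin_map((1, -1, 1)))  # (0, 0)  -> (1, -1, 1) lies in the kernel
print(lin_map((1, 0, 0)))   # (1, 0)  -> (1, 0, 0) does not
```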
 In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.
This notion can be made more precise for an $n$ by $m$ matrix $M$ by partitioning $n$ into a collection ${\mathit {rowgroups}}$, and then partitioning $m$ into a collection ${\mathit {colgroups}}$. The original matrix is then considered as the "total" of these groups, in the sense that the $(i,j)$ entry of the original matrix corresponds in a 1-to-1 way with some $(s,t)$ offset entry of some block $(x,y)$, where $x\in {\mathit {rowgroups}}$ and $y\in {\mathit {colgroups}}$. Read more...
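Assembling a matrix from a grid of blocks can be sketched in plain Python (the helper `assemble` is illustrative; libraries such as NumPy provide similar functionality, but none is assumed here):

```python
def assemble(blocks):
    """Assemble a full matrix from a 2-D grid of sub-matrices (lists of rows)."""
    out = []
    for block_row in blocks:
        for i in range(len(block_row[0])):          # rows within this band
            out.append([x for b in block_row for x in b[i]])
    return out

A = [[1, 2], [3, 4]]
B = [[5], [6]]
C = [[7, 8]]
D = [[9]]
M = assemble([[A, B],
              [C, D]])
print(M)  # [[1, 2, 5], [3, 4, 6], [7, 8, 9]]
```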
In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let $\mathbb {F}$ be a field. The column space of an m × n matrix with components from $\mathbb {F}$ is a linear subspace of the m-space $\mathbb {F} ^{m}$. The dimension of the column space is called the rank of the matrix and is at most min(m, n). A definition for matrices over a ring $\mathbb {K}$ is also possible. Read more...
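The rank can be computed by Gaussian elimination: it is the number of nonzero rows after reduction. A self-contained sketch using exact `Fraction` arithmetic to avoid rounding (the function `rank` is an illustrative implementation, not a library call):

```python
from fractions import Fraction

def rank(A):
    """Rank of a matrix via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],   # twice the first row: linearly dependent
     [0, 1, 1]]
print(rank(A))  # 2
```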
In the theory of vector spaces, a set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others; if no vector in the set can be written in this way, then the vectors are said to be linearly independent. These concepts are central to the definition of dimension.
A vector space can be finite-dimensional or infinite-dimensional depending on the number of linearly independent basis vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining a basis for a vector space. Read more...
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or—as here—simply a vector) is a geometric object that has magnitude (or length) and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by ${\overrightarrow {AB}}.$
A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Read more... In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called inner product (or rarely projection product); see also inner product space.
Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle of two vectors is the quotient of their dot product by the product of their lengths). Read more...  In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that
:$\mathbf {AB} =\mathbf {BA} =\mathbf {I} _{n}$ Read more...
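The defining condition AB = BA = Iₙ can be verified directly for a small example. A plain-Python sketch (the matrices and the helper `matmul` are illustrative, not from the text):

```python
def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1],
     [1, 1]]
B = [[1, -1],
     [-1, 2]]   # candidate inverse of A
print(matmul(A, B))  # [[1, 0], [0, 1]]
print(matmul(B, A))  # [[1, 0], [0, 1]]
```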
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables. For example,
:${\begin{alignedat}{7}3x&&\;+\;&&2y&&\;-\;&&z&&\;=\;&&1&\\2x&&\;-\;&&2y&&\;+\;&&4z&&\;=\;&&-2&\\-x&&\;+\;&&{\tfrac {1}{2}}y&&\;-\;&&z&&\;=\;&&0&\end{alignedat}}$
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
:${\begin{alignedat}{2}x&\,=\,&1\\y&\,=\,&-2\\z&\,=\,&-2\end{alignedat}}$
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
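Checking a candidate solution is a matter of substituting it into every equation. For the standard three-variable example (3x + 2y − z = 1, 2x − 2y + 4z = −2, −x + ½y − z = 0, with solution x = 1, y = −2, z = −2), a plain-Python check:

```python
x, y, z = 1, -2, -2

# Substitute into each equation; all three must hold simultaneously.
print(3*x + 2*y - z)     # 1
print(2*x - 2*y + 4*z)   # -2
print(-x + 0.5*y - z)    # 0.0
```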
In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of nonlinear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Read more... In mathematics, any vector space V has a corresponding dual vector space (or just dual space for short) consisting of all linear functionals on V, together with the vector space structure of pointwise addition and scalar multiplication by constants.
The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the algebraic dual space. When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space. Read more...
In mathematics and vector algebra, the cross product or vector product (occasionally directed area product to emphasize the geometric significance) is a binary operation on two vectors in three-dimensional space $\left(\mathbb {R} ^{3}\right)$ and is denoted by the symbol $\times$. Given two linearly independent vectors $\mathbf {a}$ and $\mathbf {b}$, the cross product, $\mathbf {a} \times \mathbf {b}$, is a vector that is perpendicular to both $\mathbf {a}$ and $\mathbf {b}$ and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product (projection product).
If two vectors have the same direction (or have the exact opposite direction from one another, i.e. are not linearly independent) or if either one has zero length, then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. The cross product is anticommutative (i.e., $\mathbf {a} \times \mathbf {b} =-\mathbf {b} \times \mathbf {a}$) and is distributive over addition (i.e., $\mathbf {a} \times (\mathbf {b} +\mathbf {c} )=\mathbf {a} \times \mathbf {b} +\mathbf {a} \times \mathbf {c}$). The space $\mathbb {R} ^{3}$ together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Read more...
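The component formula for the cross product and the properties above (perpendicularity, anticommutativity, vanishing on parallel vectors) can be sketched in plain Python (the helper `cross` is an illustrative implementation):

```python
def cross(a, b):
    """Cross product of two 3-dimensional vectors."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)

a, b = (1, 0, 0), (0, 1, 0)
print(cross(a, b))  # (0, 0, 1)   perpendicular to both a and b
print(cross(b, a))  # (0, 0, -1)  anticommutativity: b x a = -(a x b)
print(cross(a, a))  # (0, 0, 0)   parallel vectors give the zero vector
```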
In mathematics, a set B of elements (vectors) in a vector space V is called a basis, if every element of V may be written in a unique way as a (finite) linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates on B of the vector. The elements of a basis are called basis vectors.
Equivalently B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In more general terms, a basis is a linearly independent spanning set. Read more... In linear algebra, Gaussian elimination (also known as row reduction) is an algorithm for solving systems of linear equations. It is usually understood as a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix. The method is named after Carl Friedrich Gauss (1777–1855), although it was known to Chinese mathematicians as early as 179 A.D. (see History section).
To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
 Swapping two rows,
 Multiplying a row by a nonzero number,
 Adding a multiple of one row to another row.
Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the leftmost nonzero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. Read more...
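The three elementary row operations suffice to implement reduction to reduced row echelon form. A self-contained sketch using exact `Fraction` arithmetic so the result is not affected by rounding (the function `rref` is an illustrative implementation, not a library call):

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form via exact Fraction arithmetic."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]              # swap two rows
        M[r] = [x / M[r][col] for x in M[r]]         # scale: leading entry -> 1
        for i in range(len(M)):
            if i != r and M[i][col] != 0:            # add a multiple of a row
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M

# Augmented matrix for the system x + y = 3, 2x + 4y = 10
M = rref([[1, 1, 3],
          [2, 4, 10]])
print([[int(x) for x in row] for row in M])  # [[1, 0, 1], [0, 1, 2]]
```

The last column of the reduced matrix reads off the solution x = 1, y = 2.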
A vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms, listed below.
Euclidean vectors are an example of a vector space. They represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vectors in vector spaces do not necessarily have to be arrow-like objects as they appear in the mentioned examples: vectors are regarded as abstract mathematical objects with particular properties, which in some cases can be visualized as arrows. Read more... In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the space spanned by its rows. Rank is thus a measure of the "non-degenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank is commonly denoted rank(A) or rk(A); sometimes the parentheses are not written, as in rank A. Read more...
In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation so as to support a tradeoff between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:
: ${\text{significand}}\times {\text{base}}^{\text{exponent}},$
where significand is an integer (i.e., in Z), base is an integer greater than or equal to two, and exponent is also an integer.
For example:
: $1.2345=\underbrace {12345} _{\text{significand}}\times \underbrace {10} _{\text{base}}\!\!\!\!\!\!^{\overbrace {4} ^{\text{exponent}}}.$
The term floating point refers to the fact that a number's radix point (decimal point, or, more commonly in computers, binary point) can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. Read more...
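The significand × baseᵉˣᵖᵒⁿᵉⁿᵗ decomposition can be recovered for a binary floating-point number with the standard library's `math.frexp`, which returns a fraction m (with 0.5 ≤ |m| < 1) and exponent e such that x = m · 2ᵉ. A sketch, scaling the fraction up by 2⁵³ (the precision of an IEEE 754 double) to expose an integer significand:

```python
import math

x = 0.15625                    # exactly representable in binary: 5 * 2**-5
m, e = math.frexp(x)           # x == m * 2**e with 0.5 <= |m| < 1
print(m, e)                    # 0.625 -2

significand = int(m * 2**53)   # shift the binary fraction up to an integer
exponent = e - 53
print(significand * 2**exponent == x)  # True: integer * base**exponent form
```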
MATLAB (matrix laboratory) is a multi-paradigm numerical computing environment and proprietary programming language developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python.
Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing abilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. Read more... In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer (1704–1752), who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748 (and possibly knew of it as early as 1729).
Cramer's rule implemented in a naïve way is computationally inefficient for systems of more than two or three equations. In the case of n equations in n unknowns, it requires computation of n + 1 determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, it has recently been shown that Cramer's rule can be implemented in O(n^{3}) time, which is comparable to more common methods of solving systems of linear equations, such as Gaussian elimination (consistently requiring 2.5 times as many arithmetic operations for all matrix sizes), while exhibiting comparable numeric stability in most cases. Read more...  In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics.
Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article. Read more...  In linear algebra, the linear span (also called the linear hull or just span) of a set of vectors in a vector space is the intersection of all linear subspaces which each contain every vector in that set. The linear span of a set of vectors is therefore a vector space. Spans can be generalized to matroids and modules.
For expressing that a vector space V is a span of a set S, one commonly uses the following phrases: S spans V; V is spanned by S; S is a spanning set of V; S is a generating set of V. Read more...
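Forming a linear combination of coordinate vectors, and thereby exhibiting a vector as an element of a span, can be sketched in plain Python (the helper `linear_combination` and the example vectors are illustrative):

```python
def linear_combination(coeffs, vectors):
    """Return sum_i coeffs[i] * vectors[i] for same-length coordinate vectors."""
    n = len(vectors[0])
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors))
                 for j in range(n))

# (3, 5) lies in the span of (1, 1) and (1, 2): take coefficients 1 and 2.
print(linear_combination([1, 2], [(1, 1), (1, 2)]))  # (3, 5)
```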
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself such that P^{ 2} = P. That is, whenever P is applied twice to any value, it gives the same result as if it were applied once (idempotent). It leaves its image unchanged. Though abstract, this definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object. Read more... In vector algebra, a branch of mathematics, the triple product is a product of three 3-dimensional vectors, usually Euclidean vectors. The name "triple product" is used for two different products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product. Read more...
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix by producing another matrix denoted as A^{T} (also written A′, A^{tr}, ^{t}A or A^{t}). It is achieved by any one of the following equivalent actions:
 reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^{T},
 write the rows of A as the columns of A^{T},
 write the columns of A as the rows of A^{T}.
Formally, the ith row, jth column element of A^{T} is the jth row, ith column element of A: Read more... In mathematics, the seven-dimensional cross product is a bilinear operation on vectors in seven-dimensional Euclidean space. It assigns to any two vectors a, b in R^{7} a vector a × b also in R^{7}. Like the cross product in three dimensions, the seven-dimensional product is anticommutative and a × b is orthogonal both to a and to b. Unlike in three dimensions, it does not satisfy the Jacobi identity, and while the three-dimensional cross product is unique up to a sign, there are many seven-dimensional cross products. The seven-dimensional cross product has the same relationship to the octonions as the three-dimensional product does to the quaternions.
The seven-dimensional cross product is one way of generalising the cross product to other than three dimensions, and it is the only other nontrivial bilinear product of two vectors that is vector-valued, anticommutative and orthogonal. In other dimensions there are vector-valued products of three or more vectors that satisfy these conditions, and binary products with bivector results. Read more...
In mathematics, a bivector or 2vector is a quantity in exterior algebra or geometric algebra that extends the idea of scalars and vectors. If a scalar is considered an order zero quantity, and a vector is an order one quantity, then a bivector can be thought of as being of order two. Bivectors have applications in many areas of mathematics and physics. They are related to complex numbers in two dimensions and to both pseudovectors and quaternions in three dimensions. They can be used to generate rotations in any number of dimensions, and are a useful tool for classifying such rotations. They also are used in physics, tying together a number of otherwise unrelated quantities.
Bivectors are generated by the exterior product on vectors: given two vectors a and b, their exterior product a ∧ b is a bivector, as is the sum of any bivectors. Not all bivectors can be generated as a single exterior product. More precisely, a bivector that can be expressed as an exterior product is called simple; in up to three dimensions all bivectors are simple, but in higher dimensions this is not the case. The exterior product of two vectors is anticommutative and alternating, so b ∧ a is the negation of the bivector a ∧ b, producing the opposite orientation, and a ∧ a is the zero bivector. Read more... In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring or even a semiring. The matrix product is designed for representing the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. In more detail, if A is an n × m matrix and B is an m × p matrix, their matrix product AB is an n × p matrix, in which the m entries across a row of A are multiplied with the m entries down a column of B and summed to produce an entry of AB. When two linear maps are represented by matrices, then the matrix product represents the composition of the two maps.
The definition of matrix product requires that the entries belong to a semiring, and does not require multiplication of elements of the semiring to be commutative. In many applications, the matrix elements belong to a field, although the tropical semiring is also a common choice for graph shortest path problems. Even in the case of matrices over fields, the product is not commutative in general, although it is associative and is distributive over matrix addition. The identity matrices (which are the square matrices whose entries are zero outside of the main diagonal and 1 on the main diagonal) are identity elements of the matrix product. It follows that the n × n matrices over a ring form a ring, which is noncommutative except if n = 1 and the ground ring is commutative. Read more...  In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows or columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. Read more...
In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra (or more generally, a module in abstract algebra). In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and must be distinguished from inner product of two vectors (where the product is a scalar). Read more... In linear algebra and related fields of mathematics, a linear subspace, also known as a vector subspace, or, in the older literature, a linear manifold, is a vector space that is a subset of some other (higher-dimensional) vector space. A linear subspace is usually called simply a subspace when the context serves to distinguish it from other kinds of subspace. Read more...
The vector projection of a vector a on (or onto) a nonzero vector b (also known as the vector component or vector resolution of a in the direction of b) is the orthogonal projection of a onto a straight line parallel to b. It is a vector parallel to b, defined as
:$\mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} \,$ Read more... The following tables provide a comparison of numerical analysis software. Read more...
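The vector projection above has the closed form (a·b / b·b) b. A plain-Python sketch (the helper `project` is illustrative, not from any library):

```python
def project(a, b):
    """Orthogonal projection of a onto the line through the nonzero vector b."""
    scale = sum(ai * bi for ai, bi in zip(a, b)) / sum(bi * bi for bi in b)
    return tuple(scale * bi for bi in b)

# Projecting (2, 3) onto the x-axis direction (1, 0) keeps only the x part.
print(project((2, 3), (1, 0)))  # (2.0, 0.0)
```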
 The geometric algebra (GA) of a vector space is an algebra over a field, noted for its multiplication operation called the geometric product on a space of elements called multivectors, which is a superset of both the scalars $F$ and the vector space $V$. Mathematically, a geometric algebra may be defined as the Clifford algebra of a vector space with a quadratic form. Clifford's contribution was to define a new product, the geometric product, that united the Grassmann and Hamilton algebras into a single structure. Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries. In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations.
The scalars and vectors have their usual interpretation, and make up distinct subspaces of a GA. Bivectors provide a more natural representation of pseudovector quantities in vector algebra such as oriented area, oriented angle of rotation, torque, angular momentum, electromagnetic field and the Poynting vector. A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace of $V$ and orthogonal projections onto that subspace. Rotations and reflections are represented as elements. Unlike vector algebra, a GA naturally accommodates any number of dimensions and any quadratic form such as in relativity. Read more...
In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors u and v, denoted by u ∧ v, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which in three dimensions can also be computed using the cross product of the two vectors. Like the cross product, the exterior product is anticommutative, meaning that u ∧ v = −(v ∧ u) for all vectors u and v, but, unlike the cross product, the exterior product is associative. One way to visualize a bivector is as a family of parallelograms all lying in the same plane, having the same area, and with the same orientation—a choice of clockwise or counterclockwise.
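In three dimensions, the magnitude |u ∧ v| equals |u × v|, the area of the parallelogram with sides u and v. A quick check of this correspondence in plain Python (illustrative only):

```python
import math

def cross(u, v):
    """Cross product of two 3-dimensional vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def norm(v):
    """Euclidean length of v."""
    return math.sqrt(sum(x * x for x in v))

u, v = [1.0, 0.0, 0.0], [1.0, 2.0, 0.0]
print(norm(cross(u, v)))   # 2.0 — the area of the parallelogram spanned by u and v
```

The anticommutativity u ∧ v = −(v ∧ u) is mirrored by the cross product: swapping the arguments negates every component.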
When regarded in this manner, the exterior product of two vectors is called a 2-blade. More generally, the exterior product of any number k of vectors can be defined and is sometimes called a k-blade. It lives in a space known as the kth exterior power. The magnitude of the resulting k-blade is the volume of the k-dimensional parallelotope whose edges are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped generated by those vectors. Read more... In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation
:$T(\mathbf {v} )=\lambda \mathbf {v} ,$
where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. Read more...  In multilinear algebra, a multivector, sometimes called Clifford number, is an element of the exterior algebra Λ(V) of a vector space V. This algebra is graded, associative and alternating, and consists of linear combinations of simple k-vectors (also known as decomposable k-vectors or k-blades) of the form
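The eigenvalue equation T(v) = λv can be verified directly for a concrete matrix. A minimal sketch in plain Python, with a made-up 2 × 2 matrix and eigenvector:

```python
def mat_vec(A, v):
    """Apply the matrix A (list of rows) to the column vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2.0, 0.0],
     [0.0, 3.0]]
v = [1.0, 0.0]          # an eigenvector of A with eigenvalue λ = 2
print(mat_vec(A, v))    # [2.0, 0.0], i.e. 2 · v
```

Applying A scales v by exactly λ = 2 without changing its direction, which is the defining property of an eigenvector.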
:$v_{1}\wedge \cdots \wedge v_{k},$
where $v_{1},\ldots ,v_{k}$ are in V.
"Multivector" may mean either homogeneous elements (all terms of the linear combination have the same grade or degree k, that is are the product of the same number k of vectors), which are referred to as kvectors or pvectors, or may allow sums of terms in different degrees. Read more...  Let V be a vector space over a field F and let X be any set. The functions X → V can be given the structure of a vector space over F where the operations are defined pointwise, that is, for any f, g : X → V, any x in X, and any c in F, define
:${\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(c\cdot f)(x)&=c\cdot f(x)\end{aligned}}$
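These pointwise operations can be sketched in plain Python, taking F = R and V = R for simplicity (the helper names are illustrative):

```python
def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Pointwise scalar multiple: (c · f)(x) = c · f(x)."""
    return lambda x: c * f(x)

f = lambda x: x + 1
g = lambda x: 2 * x
h = scale(3, add(f, g))   # h(x) = 3 · ((x + 1) + 2x) = 9x + 3
print(h(2))               # 21
```

Because the operations are defined entirely through values of f and g at each point, the vector-space axioms for the function space follow directly from the corresponding axioms in V.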
When the domain X has additional structure, one might consider instead the subset (or subspace) of all such functions which respect that structure. For example, if X is also a vector space over F, the set of linear maps X → V forms a vector space over F with pointwise operations (often denoted Hom(X,V)). One such space is the dual space of V: the set of linear functionals V → F with addition and scalar multiplication defined pointwise. Read more...  Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating-point hardware such as vector registers or SIMD instructions.
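As an illustration of the kind of operation BLAS standardizes, the Level-1 routine AXPY computes y ← a·x + y. A plain-Python sketch of what it does (for exposition only; real BLAS implementations are heavily optimized compiled libraries, typically reached from Python through NumPy or SciPy):

```python
def axpy(a, x, y):
    """Return a*x + y componentwise — the Level-1 BLAS 'axpy' operation."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(axpy(2.0, [1.0, 2.0], [10.0, 20.0]))   # [12.0, 24.0]
```

Level-1 routines like this operate on vectors; Level-2 and Level-3 cover matrix–vector and matrix–matrix operations, where the performance benefit of a tuned implementation is greatest.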
It originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the reference implementation (sometimes confusingly referred to as the BLAS library) and is not optimized for speed but is in the public domain. Read more...  In linear algebra, the quotient of a vector space V by a subspace N is a vector space obtained by "collapsing" N to zero. The space obtained is called a quotient space and is denoted V/N (read V mod N or V by N). Read more...
 In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions n and m, then their outer product is an n × m matrix. If the first vector is taken as a column vector, then the outer product is the matrix of columns proportional to this vector, where the proportionality of each column is a component of the second vector.
The outer product connects to tensor algebra: the outer product of two vectors $\mathbf {u}$ and $\mathbf {v}$ is their tensor product $\mathbf {u} \otimes \mathbf {v} ,$
which is the matrix $\mathbf {w}$ given by $w_{ij}=u_{i}v_{j}$. More generally, the outer product is an instance of the Kronecker product. Read more...
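The entrywise rule w_ij = u_i v_j above is easy to sketch in plain Python (NumPy users would call numpy.outer instead):

```python
def outer(u, v):
    """Outer product of u (length n) and v (length m) as an n × m matrix."""
    return [[ui * vj for vj in v] for ui in u]

print(outer([1.0, 2.0, 3.0], [4.0, 5.0]))
# [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]] — a 3 × 2 matrix
```

Each column is a scalar multiple of the first vector, with the scale factor given by the corresponding component of the second vector, exactly as described above.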
Did you know...
 ... that Vera Faddeeva's 1950 book Computational methods of linear algebra was one of the first publications in that field of mathematics?
Selected images
In three-dimensional Euclidean space, planes represent solutions of linear equations, and their intersections represent the common solutions.