Multilinear map

In [[linear algebra]], a '''multilinear map''' is a [[function (mathematics)|function]] of several variables that is linear separately in each variable. More precisely, a multilinear map is a function
 
:<math>f\colon V_1 \times \cdots \times V_n \to W\text{,}</math>
 
where <math>V_1,\ldots,V_n</math> and <math>W\!</math> are [[vector space]]s (or [[module (mathematics)|module]]s over a [[commutative ring]]), with the following property: for each <math>i\!</math>, if all of the variables but <math>v_i\!</math> are held constant, then <math>f(v_1,\ldots,v_n)</math> is a [[linear map|linear function]] of <math>v_i\!</math>.<ref>Lang. ''Algebra''. Springer; 3rd edition (January 8, 2002).</ref>
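
Explicitly, linearity in the <math>i\!</math>-th argument means that for all scalars <math>a, b</math> and all <math>v_i, v_i' \in V_i</math>,
:<math>f(v_1,\ldots,a v_i + b v_i',\ldots,v_n) = a\, f(v_1,\ldots,v_i,\ldots,v_n) + b\, f(v_1,\ldots,v_i',\ldots,v_n).</math>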
 
A multilinear map of two variables is a [[bilinear map]]. More generally, a multilinear map of ''k'' variables is called a '''''k''-linear map'''. If the [[codomain]] of a multilinear map is the [[Field (mathematics)|field]] of scalars, it is called a [[multilinear form]]. Multilinear maps and multilinear forms are fundamental objects of study in [[multilinear algebra]].
 
If all variables belong to the same space, one can consider [[symmetric function|symmetric]], [[antisymmetric]], and [[alternating multilinear map|alternating]] ''k''-linear maps. The latter two coincide if the underlying [[ring (mathematics)|ring]] (or [[Field (mathematics)|field]]) has [[Characteristic (algebra)|characteristic]] different from two; otherwise the former two coincide.
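
Explicitly, for any indices <math>i \neq j</math> and vectors in the common space,
:<math>f(\ldots,v_i,\ldots,v_j,\ldots) = f(\ldots,v_j,\ldots,v_i,\ldots)</math> (symmetric),
:<math>f(\ldots,v_i,\ldots,v_j,\ldots) = -f(\ldots,v_j,\ldots,v_i,\ldots)</math> (antisymmetric),
:<math>f(\ldots,v,\ldots,v,\ldots) = 0</math> whenever two arguments are equal (alternating).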
 
== Examples ==
 
 
* Any [[bilinear map]] is a multilinear map. For example, any [[inner product]] on a vector space is a multilinear map, as is the [[cross product]] of vectors in <math>\mathbb{R}^3</math>.
* The [[determinant]] of a matrix is an [[Antisymmetric matrix|antisymmetric]] multilinear function of the columns (or rows) of a [[square matrix]]; see the sketch after this list.
* If <math>F\colon \mathbb{R}^m \to \mathbb{R}^n</math> is a [[smooth function|''C<sup>k</sup>'' function]], then the <math>k\!</math>th derivative of <math>F\!</math> at each point <math>p\!</math> in its domain can be viewed as a [[symmetric function|symmetric]] <math>k\!</math>-linear function <math>D^k\!F(p)\colon \mathbb{R}^m\times\cdots\times\mathbb{R}^m \to \mathbb{R}^n</math>.
* The [[Multilinear subspace learning#Tensor-to-vector projection .28TVP.29|tensor-to-vector projection]] in [[multilinear subspace learning]] is a multilinear map as well.
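
As a quick numerical illustration of the determinant example, the following sketch (a minimal check assuming [[NumPy]] is available; all variable names are illustrative, not from this article) verifies linearity in the first column of a 3&times;3 matrix:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
u, v = rng.normal(size=3), rng.normal(size=3)
a, b = 2.0, -3.0

def det_with_first_column(M, c):
    """Determinant of M with its first column replaced by c."""
    N = M.copy()
    N[:, 0] = c
    return np.linalg.det(N)

# Linearity in the first column:
# det(a*u + b*v | ...) = a*det(u | ...) + b*det(v | ...)
lhs = det_with_first_column(M, a * u + b * v)
rhs = a * det_with_first_column(M, u) + b * det_with_first_column(M, v)
assert np.isclose(lhs, rhs)
</syntaxhighlight>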
 
==Coordinate representation==
Let
:<math>f\colon V_1 \times \cdots \times V_n \to W\text{,}</math>
be a multilinear map between finite-dimensional vector spaces, where <math>V_i\!</math> has dimension <math>d_i\!</math>, and <math>W\!</math> has dimension <math>d\!</math>. If we choose a [[basis (linear algebra)|basis]] <math>\{\textbf{e}_{i1},\ldots,\textbf{e}_{id_i}\}</math> for each <math>V_i\!</math> and a basis <math>\{\textbf{b}_1,\ldots,\textbf{b}_d\}</math> for <math>W\!</math> (using bold for vectors), then we can define a collection of scalars <math>A_{j_1\cdots j_n}^k</math> by
:<math>f(\textbf{e}_{1j_1},\ldots,\textbf{e}_{nj_n}) = A_{j_1\cdots j_n}^1\,\textbf{b}_1 + \cdots + A_{j_1\cdots j_n}^d\,\textbf{b}_d.</math>
Then the scalars <math>\{A_{j_1\cdots j_n}^k \mid 1\leq j_i\leq d_i, 1 \leq k \leq d\}</math> completely determine the multilinear function <math>f\!</math>. In particular, if
:<math>\textbf{v}_i = \sum_{j=1}^{d_i} v_{ij} \textbf{e}_{ij}\!</math>
for <math>1 \leq i \leq n\!</math>, then
:<math>f(\textbf{v}_1,\ldots,\textbf{v}_n) = \sum_{j_1=1}^{d_1} \cdots \sum_{j_n=1}^{d_n} \sum_{k=1}^{d} A_{j_1\cdots j_n}^k v_{1j_1}\cdots v_{nj_n} \textbf{b}_k.</math>
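
In other words, evaluating <math>f\!</math> in coordinates amounts to contracting the coefficient array <math>A\!</math> with the coordinate vectors. A minimal sketch of that contraction (assuming NumPy; the helper name <code>evaluate</code> and the test data are illustrative):

<syntaxhighlight lang="python">
import itertools
import numpy as np

def evaluate(A, vs):
    """Evaluate the multilinear map with coefficient array A at the vectors vs.

    A has shape (d_1, ..., d_n, d): the first n axes index the bases of
    V_1, ..., V_n and the last axis indexes the basis of W.
    """
    out = np.zeros(A.shape[-1])
    # Direct transcription of the nested sum over j_1, ..., j_n (and k).
    for js in itertools.product(*(range(d_i) for d_i in A.shape[:-1])):
        out += np.prod([v[j] for v, j in zip(vs, js)]) * A[js]
    return out

# Example: a bilinear map R^2 x R^3 -> R^2 with random coefficients,
# cross-checked against the same contraction written as an einsum.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3, 2))
v1, v2 = rng.normal(size=2), rng.normal(size=3)
assert np.allclose(evaluate(A, [v1, v2]), np.einsum("ijk,i,j->k", A, v1, v2))
</syntaxhighlight>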
 
==Example==
Consider a trilinear function
:<math>f\colon \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}\text{,}</math>
so that <math>V_i = \mathbb{R}^2</math> and <math>d_i = 2</math> for <math>i = 1, 2, 3</math>, while <math>W = \mathbb{R}</math> and <math>d = 1</math>.
All of the <math>V_i</math> share the same basis <math>\{\textbf{e}_{1}, \textbf{e}_{2}\} = \{(1,0), (0,1)\}</math>. Then denote
 
:<math>f(\textbf{e}_{1i},\textbf{e}_{2j},\textbf{e}_{3k}) = f(\textbf{e}_{i},\textbf{e}_{j},\textbf{e}_{k}) = A_{ijk}</math>, where <math>i,j,k \in \{1,2\}</math>. In other words, the constant <math>A_{ijk}</math> is the value of <math>f\!</math> at one of the eight possible combinations of basis vectors, one from each <math>V_i</math>:
<math>
(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1),\;
(\textbf{e}_1, \textbf{e}_1, \textbf{e}_2),\;
(\textbf{e}_1, \textbf{e}_2, \textbf{e}_1),\;
(\textbf{e}_1, \textbf{e}_2, \textbf{e}_2),\;
(\textbf{e}_2, \textbf{e}_1, \textbf{e}_1),\;
(\textbf{e}_2, \textbf{e}_1, \textbf{e}_2),\;
(\textbf{e}_2, \textbf{e}_2, \textbf{e}_1),\;
(\textbf{e}_2, \textbf{e}_2, \textbf{e}_2).
</math>
 
Each vector <math>\textbf{v}_i \in V_i = \mathbb{R}^2</math> can be expressed as a linear combination of the basis vectors:
:<math>\textbf{v}_i = \sum_{j=1}^{2} v_{ij} \textbf{e}_{j} = v_{i1} \textbf{e}_1 + v_{i2} \textbf{e}_2 = v_{i1} (1, 0) + v_{i2} (0, 1).\!</math>
 
The value of the function at an arbitrary collection of three vectors <math>\textbf{v}_i \in \mathbb{R}^2</math> can then be expressed as
:<math>f(\textbf{v}_1,\textbf{v}_2, \textbf{v}_3) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} A_{i j k} v_{1i} v_{2j} v_{3k}</math>.
:<math>\begin{align}
f((a,b),(c,d),(e,f)) = {} & ace \, f(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1) + acf \, f(\textbf{e}_1, \textbf{e}_1, \textbf{e}_2) \\
& + ade \, f(\textbf{e}_1, \textbf{e}_2, \textbf{e}_1) + adf \, f(\textbf{e}_1, \textbf{e}_2, \textbf{e}_2) \\
& + bce \, f(\textbf{e}_2, \textbf{e}_1, \textbf{e}_1) + bcf \, f(\textbf{e}_2, \textbf{e}_1, \textbf{e}_2) \\
& + bde \, f(\textbf{e}_2, \textbf{e}_2, \textbf{e}_1) + bdf \, f(\textbf{e}_2, \textbf{e}_2, \textbf{e}_2).
\end{align}</math>
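
This expansion is easy to verify numerically; a short sketch (assuming NumPy, with an arbitrary coefficient array <code>A</code>; the last coordinate is named <code>g</code> rather than <code>f</code> to avoid shadowing the function):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2, 2))   # A[i, j, k] = f(e_{i+1}, e_{j+1}, e_{k+1})

def f(v1, v2, v3):
    """Trilinear map R^2 x R^2 x R^2 -> R determined by A."""
    return np.einsum("ijk,i,j,k->", A, v1, v2, v3)

(a, b), (c, d), (e, g) = rng.normal(size=(3, 2))
expansion = (a*c*e*A[0,0,0] + a*c*g*A[0,0,1] + a*d*e*A[0,1,0] + a*d*g*A[0,1,1] +
             b*c*e*A[1,0,0] + b*c*g*A[1,0,1] + b*d*e*A[1,1,0] + b*d*g*A[1,1,1])
assert np.isclose(f(np.array([a, b]), np.array([c, d]), np.array([e, g])), expansion)
</syntaxhighlight>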
 
==Relation to tensor products==
There is a natural one-to-one correspondence between multilinear maps
:<math>f\colon V_1 \times \cdots \times V_n \to W\text{,}</math>
and linear maps
:<math>F\colon V_1 \otimes \cdots \otimes V_n \to W\text{,}</math>
where <math>V_1 \otimes \cdots \otimes V_n\!</math> denotes the [[tensor product]] of <math>V_1,\ldots,V_n</math>. The relation between the functions <math>f\!</math> and <math>F\!</math> is given by the formula
:<math>F(v_1\otimes \cdots \otimes v_n) = f(v_1,\ldots,v_n).</math>
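
In coordinates, this correspondence can be made concrete: applying <math>F\!</math> to the [[Kronecker product]] of the coordinate vectors (which represents <math>v_1 \otimes \cdots \otimes v_n</math> in the induced basis) gives the same result as evaluating <math>f\!</math> directly. A sketch for a bilinear map (assuming NumPy; shapes and names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 3, 4))   # coefficients of a bilinear f: R^2 x R^3 -> R^4
v1, v2 = rng.normal(size=2), rng.normal(size=3)

# f evaluated directly as a multilinear map.
f_val = np.einsum("ijk,i,j->k", A, v1, v2)

# F as a linear map on the tensor product R^2 (x) R^3 = R^6.
F = A.reshape(2 * 3, 4).T        # a 4 x 6 matrix
F_val = F @ np.kron(v1, v2)      # v1 (x) v2 as a vector in R^6

assert np.allclose(f_val, F_val)
</syntaxhighlight>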
 
==Multilinear functions on ''n''&times;''n'' matrices==
 
One can consider a multilinear function on an ''n''&times;''n'' matrix over a [[commutative ring]] ''K'' with identity as a function of the rows (or equivalently the columns) of the matrix. Let ''A'' be such a matrix and let <math>a_i</math>, 1 ≤ ''i'' ≤ ''n'', be its rows. Then the multilinear function ''D'' can be written as
 
:<math>D(A) = D(a_{1},\ldots,a_{n}) \,</math>
 
satisfying, for each <math>i</math> and any scalar <math>c \in K</math>,
:<math>D(a_{1},\ldots,c a_{i} + a_{i}',\ldots,a_{n}) = c D(a_{1},\ldots,a_{i},\ldots,a_{n}) + D(a_{1},\ldots,a_{i}',\ldots,a_{n}).</math>
 
If we let <math>\hat{e}_j</math> represent the ''j''th row of the identity matrix, we can express each row <math>a_{i}</math> as the sum
 
:<math>a_{i} = \sum_{j=1}^n A(i,j)\hat{e}_{j}</math>
 
Using the multilinearity of ''D'', we rewrite ''D''(''A'') as
 
:<math>
D(A) = D\left(\sum_{j=1}^n A(1,j)\hat{e}_{j}, a_2, \ldots, a_n\right)
= \sum_{j=1}^n A(1,j) D(\hat{e}_{j},a_2,\ldots,a_n)
</math>
 
Continuing this substitution for each <math>a_i</math>, we get
 
:<math>
D(A) = \sum_{1\le k_i\le n} A(1,k_{1})A(2,k_{2})\dots A(n,k_{n}) D(\hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}})
</math>
 
:where the multi-index sum is shorthand for a series of nested summations, one for each <math>k_i</math> with <math>1 \le i \le n</math>:
::<math>
\sum_{1\le k_i \le n} = \sum_{1\le k_1 \le n} \cdots \sum_{1\le k_n \le n}.
</math>
 
Therefore, <math>D(A)</math> is uniquely determined by the values of <math>D</math> on the tuples of basis rows <math>(\hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}})</math>.
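
A direct transcription of this sum (a minimal sketch assuming NumPy; <code>D_basis</code> is an illustrative table of the values of <math>D</math> on basis rows, not notation from this article):

<syntaxhighlight lang="python">
import itertools
import numpy as np

def D(A, D_basis):
    """Multilinear function of the rows of A, reconstructed from its
    values on rows of the identity matrix.

    D_basis has n axes of length n each; D_basis[k1, ..., kn] holds
    the value D(e_{k1+1}, ..., e_{kn+1}).
    """
    n = A.shape[0]
    total = 0.0
    for ks in itertools.product(range(n), repeat=n):
        coeff = np.prod([A[i, k] for i, k in enumerate(ks)])  # A(1,k1)...A(n,kn)
        total += coeff * D_basis[ks]
    return total
</syntaxhighlight>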
 
==Example==
In the case of 2&times;2 matrices, we get
 
:<math>
D(A) = A_{1,1}A_{2,1}D(\hat{e}_1,\hat{e}_1) + A_{1,1}A_{2,2}D(\hat{e}_1,\hat{e}_2) + A_{1,2}A_{2,1}D(\hat{e}_2,\hat{e}_1) + A_{1,2}A_{2,2}D(\hat{e}_2,\hat{e}_2) \,
</math>
 
where <math>\hat{e}_1 = [1,0]</math> and <math>\hat{e}_2 = [0,1]</math>. If we restrict <math>D</math> to be an [[alternating multilinear map|alternating]] function, then <math>D(\hat{e}_1,\hat{e}_1) = D(\hat{e}_2,\hat{e}_2) = 0</math> and <math>D(\hat{e}_2,\hat{e}_1) = -D(\hat{e}_1,\hat{e}_2) = -D(I)</math>. Letting <math>D(I) = 1</math>, we get the determinant function on 2&times;2 matrices:
 
:<math>
D(A) = A_{1,1}A_{2,2} - A_{1,2}A_{2,1} \,
</math>
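
With the <code>D</code> helper sketched in the previous section, this alternating choice of basis values indeed reproduces the determinant (an illustrative check, assuming NumPy):

<syntaxhighlight lang="python">
import numpy as np

# Alternating, normalized values on basis rows for n = 2:
# D(e1, e1) = D(e2, e2) = 0, D(e1, e2) = 1, D(e2, e1) = -1.
D_basis = np.array([[0.0, 1.0],
                    [-1.0, 0.0]])

A = np.array([[3.0, 5.0],
              [2.0, 7.0]])
assert np.isclose(D(A, D_basis), np.linalg.det(A))   # 3*7 - 5*2 = 11
</syntaxhighlight>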
 
==Properties==
 
A multilinear map has a value of zero whenever one of its arguments is zero: writing that argument as <math>0 \cdot v</math> and pulling the scalar out by linearity gives <math>f(\ldots,0,\ldots) = 0 \cdot f(\ldots,v,\ldots) = 0</math>.
 
For ''n'' > 1, the only ''n''-linear map that is also a linear map is the [[zero function]]: roughly, scaling all <math>n</math> arguments by <math>c</math> multiplies an <math>n</math>-linear value by <math>c^n</math>, but a linear value only by <math>c</math>. See [[bilinear map#Examples]].
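
A quick numerical illustration (assuming NumPy): for <math>n = 2</math>, the determinant is 2-linear in the rows but not linear as a map on matrices, since it is not additive:

<syntaxhighlight lang="python">
import numpy as np

A = np.eye(2)
B = np.eye(2)
# Additivity fails: det(A + B) = 4, but det(A) + det(B) = 2.
assert not np.isclose(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))
</syntaxhighlight>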
 
==See also==
* [[Algebraic form]]
* [[Multilinear form]]
* [[Homogeneous polynomial]]
* [[Homogeneous function]]
* [[Tensor]]s
* [[Multilinear subspace learning#Multilinear projection|Multilinear projection]]
* [[Multilinear subspace learning]]
 
 
 
== References ==
{{reflist}}