Improve documentation
Signed-off-by: Julius Koskela <julius.koskela@unikie.com>
# Operations Index

## 1. Addition
Element-wise addition of two tensors.

\( C = A + B \) where \( C_{ijk...} = A_{ijk...} + B_{ijk...} \) for all indices \( i, j, k, ... \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[5, 6], [7, 8]]);
let sum = t1 + t2;
```
```sh
[[6, 8], [10, 12]]
```
## 2. Subtraction
Element-wise subtraction of two tensors.

\( C = A - B \) where \( C_{ijk...} = A_{ijk...} - B_{ijk...} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[5, 6], [7, 8]]);
let diff = t1 - t2;
```
```sh
[[-4, -4], [-4, -4]]
```
## 3. Multiplication
Element-wise multiplication of two tensors.

\( C = A \odot B \) where \( C_{ijk...} = A_{ijk...} \times B_{ijk...} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[5, 6], [7, 8]]);
let prod = t1 * t2;
```
```sh
[[5, 12], [21, 32]]
```
## 4. Division
Element-wise division of two tensors.

\( C = A \div B \) where \( C_{ijk...} = A_{ijk...} \div B_{ijk...} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[1, 2], [3, 4]]);
let quot = t1 / t2;
```
```sh
[[1, 1], [1, 1]]
```
## 5. Contraction
Contract two tensors over the given axes.

For matrices \( A \) and \( B \), \( C = AB \) where \( C_{ij} = \sum_k A_{ik} B_{kj} \).
```rust
let t1 = tensor!([[1, 2], [3, 4], [5, 6]]);
let t2 = tensor!([[1, 2, 3], [4, 5, 6]]);
let cont = contract((t1, [1]), (t2, [0]));
```
```sh
TODO!
```
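While the output above is still marked TODO, the expected values can be worked out by hand: contracting axis 1 of `t1` with axis 0 of `t2` is exactly the matrix product defined above. A minimal plain-Rust sketch (independent of the tensor library) that computes it:

```rust
// Contraction over t1 axis 1 and t2 axis 0 is the ordinary matrix
// product: C[i][j] = sum_k A[i][k] * B[k][j].
fn matmul(a: &[Vec<i32>], b: &[Vec<i32>]) -> Vec<Vec<i32>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut c = vec![vec![0; m]; n];
    for i in 0..n {
        for j in 0..m {
            for p in 0..k {
                c[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    c
}

fn main() {
    let t1 = vec![vec![1, 2], vec![3, 4], vec![5, 6]]; // 3x2
    let t2 = vec![vec![1, 2, 3], vec![4, 5, 6]];       // 2x3
    println!("{:?}", matmul(&t1, &t2)); // [[9, 12, 15], [19, 26, 33], [29, 40, 51]]
}
```

Whatever formatting the library prints, the resulting 3x3 values should match this matrix.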
## 6. Reduction (e.g., Sum)

\( \text{sum}(A) = \sum_{ijk...} A_{ijk...} \), the sum over all elements of \( A \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let total = t1.sum();
```
```sh
10
```
## 7. Broadcasting

Adjusts tensors with different shapes so that they become compatible for element-wise operations; supported functions perform the shape adjustment automatically.
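Whether broadcasting is available depends on the library. As an illustration of the semantics only (not the library's API), here is a plain-Rust sketch that broadcasts a row vector across each row of a matrix during addition:

```rust
// Row-vector broadcasting: the 1x3 row is virtually repeated along the
// first axis so it can be added to a 2x3 matrix element-wise.
fn broadcast_add(m: &[Vec<i32>], row: &[i32]) -> Vec<Vec<i32>> {
    m.iter()
        .map(|r| r.iter().zip(row).map(|(a, b)| a + b).collect())
        .collect()
}

fn main() {
    let m = vec![vec![1, 2, 3], vec![4, 5, 6]];
    let row = vec![10, 20, 30];
    println!("{:?}", broadcast_add(&m, &row)); // [[11, 22, 33], [14, 25, 36]]
}
```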
## 8. Reshape

Changing the shape of a tensor without altering its data.
```rust
let t1 = tensor!([1, 2, 3, 4, 5, 6]);
let tr = t1.reshape([2, 3]);
```
```sh
[[1, 2, 3], [4, 5, 6]]
```
## 9. Transpose

Transpose a tensor over the given axes.

\( B = A^T \) where \( B_{ij} = A_{ji} \).
```rust
let t1 = tensor!([1, 2, 3, 4]);
let transposed = t1.transpose();
```
```sh
TODO!
```
## 10. Concatenation

Joining tensors along a specified dimension.
```rust
let t1 = tensor!([1, 2, 3]);
let t2 = tensor!([4, 5, 6]);
let cat = t1.concat(&t2, 0);
```
```sh
TODO!
```
## 11. Slicing and Indexing

Extracting parts of tensors based on indices.
```rust
let t1 = tensor!([[1, 2, 3], [4, 5, 6]]);
let slice = t1.slice(s![1, ..]);
```
```sh
TODO!
```
## 12. Element-wise Functions (e.g., Sigmoid)

**Mathematical Definition**:

Applying a function to each element of a tensor, like \( \sigma(x) = \frac{1}{1 + e^{-x}} \) for the sigmoid.

**Rust Code Example**:
```rust
let tensor = Tensor::<f32, 2>::from([-1.0, 0.0, 1.0, 2.0]); // 2x2 tensor
let sigmoid_tensor = tensor.map(|x| 1.0 / (1.0 + (-x).exp())); // Apply sigmoid element-wise
```
## 13. Gradient Computation/Automatic Differentiation
**Description**:

Calculating the derivatives of tensors, crucial for training machine learning models.
**Rust Code Example**: Depends on whether your tensor library supports automatic differentiation. This is typically more complex and may involve constructing computational graphs.
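As a library-independent illustration of the idea, forward-mode automatic differentiation can be sketched with dual numbers, where every value carries its derivative alongside it. The `Dual` type below is hypothetical, not part of any tensor library:

```rust
// Forward-mode autodiff sketch: each value carries its derivative, and
// arithmetic propagates both via the sum and product rules.
#[derive(Clone, Copy, Debug)]
struct Dual { val: f64, der: f64 }

impl Dual {
    fn var(x: f64) -> Self { Dual { val: x, der: 1.0 } } // d/dx x = 1
    fn add(self, o: Dual) -> Self {
        Dual { val: self.val + o.val, der: self.der + o.der }
    }
    fn mul(self, o: Dual) -> Self {
        Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
    }
}

fn main() {
    // f(x) = x * x + x, so f'(x) = 2x + 1; at x = 3: f = 12, f' = 7.
    let x = Dual::var(3.0);
    let f = x.mul(x).add(x);
    println!("f = {}, f' = {}", f.val, f.der);
}
```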
## 14. Normalization Operations (e.g., Batch Normalization)
**Description**: Standardizing the inputs of a model across the batch dimension.
**Rust Code Example**: This is specific to deep learning libraries and may not be directly supported in a general-purpose tensor library.
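The underlying computation is simple, though: standardize each batch to zero mean and unit variance. A plain-Rust sketch over a 1-D batch (not a library API):

```rust
// Batch normalization over a 1-D batch: subtract the batch mean and
// divide by the batch standard deviation (with epsilon for stability).
fn batch_norm(xs: &[f64], eps: f64) -> Vec<f64> {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    let std = (var + eps).sqrt();
    xs.iter().map(|x| (x - mean) / std).collect()
}

fn main() {
    let out = batch_norm(&[1.0, 2.0, 3.0, 4.0], 1e-5);
    println!("{:?}", out); // zero mean, approximately unit variance
}
```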
## 15. Convolution Operations
**Description**: Essential for image processing and CNNs.
**Rust Code Example**: If your library supports it, convolutions typically involve using a specialized function that takes the input tensor and a kernel tensor.
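To make the idea concrete, here is a library-independent sketch of a "valid" 1-D convolution (strictly, cross-correlation, as most deep-learning frameworks implement it):

```rust
// "Valid" 1-D convolution: slide the kernel over the input and take a
// dot product at each position; output length is input - kernel + 1.
fn conv1d(input: &[i32], kernel: &[i32]) -> Vec<i32> {
    input
        .windows(kernel.len())
        .map(|w| w.iter().zip(kernel).map(|(a, b)| a * b).sum())
        .collect()
}

fn main() {
    // Kernel [1, 0, -1] is a simple edge detector.
    println!("{:?}", conv1d(&[1, 2, 3, 4, 5], &[1, 0, -1])); // [-2, -2, -2]
}
```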
## 16. Pooling Operations (e.g., Max Pooling)
**Description**: Reducing the spatial dimensions of a tensor, commonly used in CNNs.
**Rust Code Example**: Again, this depends on your library's support for such operations.
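As a library-independent illustration, 1-D max pooling with a window of 2 and stride 2 can be sketched as:

```rust
// Max pooling: each non-overlapping window of the input is reduced to
// its maximum element.
fn max_pool(input: &[i32], window: usize) -> Vec<i32> {
    input
        .chunks(window)
        .map(|c| *c.iter().max().unwrap())
        .collect()
}

fn main() {
    println!("{:?}", max_pool(&[1, 3, 2, 8, 5, 4], 2)); // [3, 8, 5]
}
```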
## 17. Tensor Slicing and Joining
**Description**: Operations to slice a tensor into sub-tensors or join multiple tensors into a larger tensor.
**Rust Code Example**: Similar to the slicing and concatenation examples provided above.
## 18. Dimension Permutation
**Description**: Rearranging the dimensions of a tensor.
```rust
let tensor = Tensor::<i32, 3>::from([...]); // 3D tensor
let permuted_tensor = tensor.permute_dims([2, 0, 1]); // Permute dimensions
```
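What a permutation does to indexing can be illustrated without the library. Assuming the common convention that new axis \( a \) is old axis \( \text{perm}[a] \), permuting with `[2, 0, 1]` moves the element at position \( (i, j, k) \) to position \( (k, i, j) \). A small sketch (the `permute_index` helper is hypothetical):

```rust
// Map an element's old index to its new index under a permutation,
// assuming new axis a = old axis perm[a].
fn permute_index(idx: [usize; 3], perm: [usize; 3]) -> [usize; 3] {
    [idx[perm[0]], idx[perm[1]], idx[perm[2]]]
}

fn main() {
    println!("{:?}", permute_index([1, 2, 3], [2, 0, 1])); // [3, 1, 2]
}
```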
## 19. Expand and Squeeze Operations
**Description**: Increasing or decreasing the dimensions of a tensor (adding/removing singleton dimensions).
**Rust Code Example**: Depends on the specific functions provided by your library.
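Conceptually these operations only edit the shape vector; the data is untouched. A plain-Rust sketch with hypothetical `unsqueeze`/`squeeze` helpers (not library functions):

```rust
// unsqueeze inserts a singleton axis at the given position;
// squeeze removes all size-1 axes.
fn unsqueeze(shape: &[usize], axis: usize) -> Vec<usize> {
    let mut s = shape.to_vec();
    s.insert(axis, 1);
    s
}

fn squeeze(shape: &[usize]) -> Vec<usize> {
    shape.iter().copied().filter(|&d| d != 1).collect()
}

fn main() {
    let s = unsqueeze(&[2, 3], 0);
    println!("{:?}", s);           // [1, 2, 3]
    println!("{:?}", squeeze(&s)); // [2, 3]
}
```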
## 20. Data Type Conversions
**Description**: Converting tensors from one data type to another.
```rust
let converted_tensor = tensor.to_type::<f32>(); // Convert to f32 tensor
```
These examples provide a general guide. The actual implementation details may vary depending on the specific features and capabilities of the Rust tensor library you're using.
## 21. Tensor Decompositions
**CANDECOMP/PARAFAC (CP) Decomposition**: This decomposes a tensor into a sum of component rank-one tensors. For a third-order tensor, it's like expressing it as a sum of outer products of vectors. This is useful in applications like signal processing, psychometrics, and chemometrics.
**Tucker Decomposition**: Similar to PCA for matrices, Tucker Decomposition decomposes a tensor into a core tensor multiplied by a matrix along each mode (dimension). It's more general than CP Decomposition and is useful in areas like data compression and tensor completion.
**Higher-Order Singular Value Decomposition (HOSVD)**: A generalization of SVD for higher-order tensors, HOSVD decomposes a tensor into a core tensor and a set of orthogonal matrices for each mode. It's used in image processing, computer vision, and multilinear subspace learning.