Delete WIP examples and docs

Signed-off-by: Julius Koskela <julius.koskela@unikie.com>
Julius Koskela 2024-01-03 17:13:24 +02:00
parent 46a466eca8
commit f14892f0ef
Signed by: julius
GPG Key ID: 5A7B7F4897C2914B
3 changed files with 0 additions and 367 deletions


@@ -1,34 +0,0 @@
To understand how the tensor contraction should work for the given tensors `a` and `b`, let's first clarify their shapes and then walk through the contraction steps:
1. **Tensor Shapes**:
- Tensor `a` is a 3x2 matrix (3 rows and 2 columns): \[\begin{matrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{matrix}\]
- Tensor `b` is a 2x3 matrix (2 rows and 3 columns): \[\begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{matrix}\]
2. **Tensor Contraction Operation**:
- The contraction operation in this case involves multiplying corresponding elements along the shared dimension (the second dimension of `a` and the first dimension of `b`) and summing the results.
- The resulting tensor will have the shape determined by the other dimensions of the original tensors, which in this case is 3x3.
3. **Contraction Steps**:
- Step 1: Multiply each element of the first row of `a` with the corresponding element of the first column of `b`, then sum these products. This forms the first element of the resulting matrix.
- \( (1 \times 1) + (2 \times 4) = 1 + 8 = 9 \)
- Step 2: Multiply each element of the first row of `a` with the corresponding element of the second column of `b`, then sum these products. This forms the second element of the first row of the resulting matrix.
- \( (1 \times 2) + (2 \times 5) = 2 + 10 = 12 \)
- Step 3: Multiply each element of the first row of `a` with the corresponding element of the third column of `b`, then sum these products. This forms the third element of the first row of the resulting matrix.
- \( (1 \times 3) + (2 \times 6) = 3 + 12 = 15 \)
- Continue this process for the remaining rows of `a` and columns of `b`:
- For the second row of `a`:
- \( (3 \times 1) + (4 \times 4) = 3 + 16 = 19 \)
- \( (3 \times 2) + (4 \times 5) = 6 + 20 = 26 \)
- \( (3 \times 3) + (4 \times 6) = 9 + 24 = 33 \)
- For the third row of `a`:
- \( (5 \times 1) + (6 \times 4) = 5 + 24 = 29 \)
- \( (5 \times 2) + (6 \times 5) = 10 + 30 = 40 \)
- \( (5 \times 3) + (6 \times 6) = 15 + 36 = 51 \)
4. **Resulting Tensor**:
- The resulting 3x3 tensor from the contraction of `a` and `b` will be:
\[\begin{matrix} 9 & 12 & 15 \\ 19 & 26 & 33 \\ 29 & 40 & 51 \end{matrix}\]
These steps provide the detailed calculations for each element of the resulting tensor after contracting tensors `a` and `b`.
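For completeness, a minimal sketch of the same computation in code, using the `tensor!` macro and `contract` function that appear in this repository's examples (the import path and printed formatting are assumptions):
```rust
use manifold::*; // assumed to export `tensor!` and `contract` as in the examples

fn main() {
    // a is 3x2, b is 2x3; contract a's axis 1 against b's axis 0.
    let a = tensor!([[1, 2], [3, 4], [5, 6]]);
    let b = tensor!([[1, 2, 3], [4, 5, 6]]);
    let c = contract((&a, [1]), (&b, [0]));

    // Expected 3x3 result: [[9, 12, 15], [19, 26, 33], [29, 40, 51]]
    println!("{}", c);
}
```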


@@ -1,239 +0,0 @@
# Operations Index
## 1. Addition
Element-wise addition of two tensors.
\( C = A + B \) where \( C_{ijk...} = A_{ijk...} + B_{ijk...} \) for all indices \( i, j, k, ... \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[5, 6], [7, 8]]);
let sum = t1 + t2;
```
```sh
[[6, 8], [10, 12]]
```
## 2. Subtraction
Element-wise subtraction of two tensors.
\( C = A - B \) where \( C_{ijk...} = A_{ijk...} - B_{ijk...} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[5, 6], [7, 8]]);
let diff = t1 - t2;
```
```sh
[[-4, -4], [-4, -4]]
```
## 3. Multiplication
Element-wise multiplication of two tensors.
\( C = A \odot B \) where \( C_{ijk...} = A_{ijk...} \times B_{ijk...} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[5, 6], [7, 8]]);
let prod = t1 * t2;
```
```sh
[[5, 12], [21, 32]]
```
## 4. Division
Element-wise division of two tensors.
\( C = A \div B \) where \( C_{ijk...} = A_{ijk...} \div B_{ijk...} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let t2 = tensor!([[1, 2], [3, 4]]);
let quot = t1 / t2;
```
```sh
[[1, 1], [1, 1]]
```
## 5. Contraction
Contract two tensors over given axes.
For matrices \( A \) and \( B \), \( C = AB \) where \( C_{ij} = \sum_k A_{ik} B_{kj} \).
```rust
let t1 = tensor!([[1, 2], [3, 4], [5, 6]]);
let t2 = tensor!([[1, 2, 3], [4, 5, 6]]);
let cont = contract((t1, [1]), (t2, [0]));
```
```sh
[[9, 12, 15], [19, 26, 33], [29, 40, 51]]
```
## 6. Reduction (e.g., Sum)
\( \text{sum}(A) = \sum_{ijk...} A_{ijk...} \), the sum over all elements of \( A \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let total = t1.sum();
```
```sh
10
```
## 7. Broadcasting
Automatically adjusts tensors with different shapes to make them compatible for element-wise operations when using supported functions; a library-independent sketch of the idea follows.
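Since the library's broadcasting API is not shown here, the following is a plain-Rust sketch of the rule itself (names are illustrative): a 3-element row vector is virtually repeated across each row of a 2x3 matrix before the element-wise addition.
```rust
// Conceptual sketch: broadcast a [3]-vector across a 2x3 matrix and add.
fn broadcast_add(matrix: &[[i32; 3]; 2], row: &[i32; 3]) -> [[i32; 3]; 2] {
    let mut out = [[0; 3]; 2];
    for (i, r) in matrix.iter().enumerate() {
        for (j, v) in r.iter().enumerate() {
            // The row vector is reused (broadcast) for every row of the matrix.
            out[i][j] = *v + row[j];
        }
    }
    out
}

fn main() {
    let m = [[1, 2, 3], [4, 5, 6]];
    let r = [10, 20, 30];
    assert_eq!(broadcast_add(&m, &r), [[11, 22, 33], [14, 25, 36]]);
}
```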
## 8. Reshape
Changing the shape of a tensor without altering its data.
```rust
let t1 = tensor!([1, 2, 3, 4, 5, 6]);
let tr = t1.reshape([2, 3]);
```
```sh
[[1, 2, 3], [4, 5, 6]]
```
## 9. Transpose
Transpose a tensor over given axes.
\( B = A^T \) where \( B_{ij} = A_{ji} \).
```rust
let t1 = tensor!([[1, 2], [3, 4]]);
let transposed = t1.transpose([1, 0]).unwrap();
```
```sh
[[1, 3], [2, 4]]
```
## 10. Concatenation
Joining tensors along a specified dimension.
```rust
let t1 = tensor!([1, 2, 3]);
let t2 = tensor!([4, 5, 6]);
let cat = t1.concat(&t2, 0);
```
```sh
[1, 2, 3, 4, 5, 6]
```
## 11. Slicing and Indexing
Extracting parts of tensors based on indices.
```rust
let t1 = tensor!([[1, 2, 3], [4, 5, 6]]);
let slice = t1.slice(s![1, ..]);
```
```sh
[4, 5, 6]
```
## 12. Element-wise Functions (e.g., Sigmoid)
**Mathematical Definition**:
Applying a function to each element of a tensor, like \( \sigma(x) = \frac{1}{1 + e^{-x}} \) for sigmoid.
**Rust Code Example**:
```rust
let tensor = Tensor::<f32, 2>::from([[-1.0, 0.0], [1.0, 2.0]]); // 2x2 tensor
let sigmoid_tensor = tensor.map(|x| 1.0 / (1.0 + (-x).exp())); // Apply sigmoid element-wise
```
## 13. Gradient Computation/Automatic Differentiation
**Description**:
Calculating the derivatives of tensors, crucial for training machine learning models.
**Rust Code Example**: Depends on whether your tensor library supports automatic differentiation. This is typically more complex and may involve constructing computational graphs; a library-independent sketch of the core idea follows.
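To illustrate the idea without assuming anything about this library, here is a minimal forward-mode sketch using dual numbers, where each value carries its derivative alongside it; this is not the library's API, just the underlying technique.
```rust
// Forward-mode AD with dual numbers: each value carries its derivative.
#[derive(Clone, Copy, Debug)]
struct Dual {
    val: f64,
    der: f64,
}

impl Dual {
    // Seed the derivative with 1.0: we differentiate with respect to this variable.
    fn var(x: f64) -> Self {
        Dual { val: x, der: 1.0 }
    }
    // Product rule: (uv)' = u'v + uv'
    fn mul(self, o: Dual) -> Dual {
        Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
    }
    // Sum rule: (u + v)' = u' + v'
    fn add(self, o: Dual) -> Dual {
        Dual { val: self.val + o.val, der: self.der + o.der }
    }
}

fn main() {
    // f(x) = x * x + x, so f'(x) = 2x + 1; at x = 3: f = 12, f' = 7.
    let x = Dual::var(3.0);
    let f = x.mul(x).add(x);
    assert_eq!((f.val, f.der), (12.0, 7.0));
}
```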
## 14. Normalization Operations (e.g., Batch Normalization)
**Description**: Standardizing the inputs of a model across the batch dimension.
**Rust Code Example**: This is specific to deep learning libraries and may not be directly supported in a general-purpose tensor library; a generic sketch of the normalization step follows.
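For intuition, a minimal sketch of the core computation over one feature across a batch, assuming the standard formula \( \hat{x} = (x - \mu) / \sqrt{\sigma^2 + \epsilon} \); the learnable scale and shift parameters are omitted.
```rust
// Normalize one feature across a batch: zero mean, unit variance.
fn batch_norm(batch: &[f64], eps: f64) -> Vec<f64> {
    let n = batch.len() as f64;
    let mean = batch.iter().sum::<f64>() / n;
    let var = batch.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    batch.iter().map(|x| (x - mean) / (var + eps).sqrt()).collect()
}

fn main() {
    let normalized = batch_norm(&[1.0, 2.0, 3.0, 4.0], 1e-5);
    println!("{:?}", normalized); // approximately [-1.34, -0.45, 0.45, 1.34]
}
```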
## 15. Convolution Operations
**Description**: Essential for image processing and CNNs.
**Rust Code Example**: If your library supports it, convolutions typically involve a specialized function that takes the input tensor and a kernel tensor; a naive library-independent sketch follows.
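As an illustration independent of any tensor library, here is a naive 2D "valid" convolution; like most deep learning libraries, it actually computes cross-correlation (the kernel is not flipped). Names and types are illustrative.
```rust
// Naive 2D "valid" cross-correlation of an image with a kernel.
fn conv2d_valid(img: &[Vec<f32>], ker: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (ih, iw) = (img.len(), img[0].len());
    let (kh, kw) = (ker.len(), ker[0].len());
    let (oh, ow) = (ih - kh + 1, iw - kw + 1);
    let mut out = vec![vec![0.0; ow]; oh];
    for i in 0..oh {
        for j in 0..ow {
            // Slide the kernel over the image and accumulate the dot product.
            for di in 0..kh {
                for dj in 0..kw {
                    out[i][j] += img[i + di][j + dj] * ker[di][dj];
                }
            }
        }
    }
    out
}

fn main() {
    let img = vec![
        vec![1.0, 2.0, 3.0],
        vec![4.0, 5.0, 6.0],
        vec![7.0, 8.0, 9.0],
    ];
    let ker = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    println!("{:?}", conv2d_valid(&img, &ker)); // [[6.0, 8.0], [12.0, 14.0]]
}
```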
## 16. Pooling Operations (e.g., Max Pooling)
**Description**: Reducing the spatial dimensions of a tensor, commonly used in CNNs.
**Rust Code Example**: Again, this depends on your library's support for such operations; a generic sketch follows.
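A library-independent sketch of 2x2 max pooling with stride 2 on a plain matrix (the input dimensions are assumed even):
```rust
// 2x2 max pooling with stride 2 over a matrix with even dimensions.
fn max_pool_2x2(input: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (h, w) = (input.len(), input[0].len());
    let mut out = vec![vec![f32::NEG_INFINITY; w / 2]; h / 2];
    for i in 0..h / 2 {
        for j in 0..w / 2 {
            // Take the maximum over each non-overlapping 2x2 window.
            for di in 0..2 {
                for dj in 0..2 {
                    out[i][j] = out[i][j].max(input[2 * i + di][2 * j + dj]);
                }
            }
        }
    }
    out
}

fn main() {
    let input = vec![
        vec![1.0, 2.0, 5.0, 6.0],
        vec![3.0, 4.0, 7.0, 8.0],
        vec![9.0, 10.0, 13.0, 14.0],
        vec![11.0, 12.0, 15.0, 16.0],
    ];
    println!("{:?}", max_pool_2x2(&input)); // [[4.0, 8.0], [12.0, 16.0]]
}
```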
## 17. Tensor Slicing and Joining
**Description**: Operations to slice a tensor into sub-tensors or join multiple tensors into a larger tensor.
**Rust Code Example**: Similar to the slicing and concatenation examples provided above.
## 18. Dimension Permutation
**Description**: Rearranging the dimensions of a tensor.
**Rust Code Example**:
```rust
let tensor = Tensor::<i32, 3>::from([...]); // 3D tensor
let permuted_tensor = tensor.permute_dims([2, 0, 1]); // Permute dimensions
```
## 19. Expand and Squeeze Operations
**Description**: Increasing or decreasing the dimensions of a tensor (adding/removing singleton dimensions).
**Rust Code Example**: Depends on the specific functions provided by your library; a shape-level sketch follows.
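Since the library's functions for this are not shown, here is a shape-level sketch: expanding inserts a singleton (size-1) dimension, squeezing removes one, and neither touches the underlying data. Function names are illustrative.
```rust
// Shape-level view of expand/squeeze: only the dimension list changes.
fn expand(shape: &[usize], axis: usize) -> Vec<usize> {
    let mut s = shape.to_vec();
    s.insert(axis, 1); // add a singleton dimension
    s
}

fn squeeze(shape: &[usize], axis: usize) -> Vec<usize> {
    assert_eq!(shape[axis], 1, "can only squeeze a singleton dimension");
    let mut s = shape.to_vec();
    s.remove(axis);
    s
}

fn main() {
    let shape = vec![2, 3];
    let expanded = expand(&shape, 0); // [1, 2, 3]
    assert_eq!(squeeze(&expanded, 0), shape); // back to [2, 3]
}
```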
## 20. Data Type Conversions
**Description**: Converting tensors from one data type to another.
**Rust Code Example**:
```rust
let tensor = Tensor::<i32, 2>::from([[1, 2], [3, 4]]); // 2x2 tensor
let converted_tensor = tensor.to_type::<f32>(); // Convert to f32 tensor
```
These examples provide a general guide. The actual implementation details may vary depending on the specific features and capabilities of the Rust tensor library you're using.
## 21. Tensor Decompositions
**CANDECOMP/PARAFAC (CP) Decomposition**: This decomposes a tensor into a sum of component rank-one tensors. For a third-order tensor, it's like expressing it as a sum of outer products of vectors. This is useful in applications like signal processing, psychometrics, and chemometrics.
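In standard notation (not specific to any library), the rank-\( R \) CP decomposition of a third-order tensor \( \mathcal{X} \) reads \[ \mathcal{X} \approx \sum_{r=1}^{R} a_r \circ b_r \circ c_r \] where \( \circ \) denotes the vector outer product, so each summand is a rank-one tensor.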
**Tucker Decomposition**: Similar to PCA for matrices, Tucker Decomposition decomposes a tensor into a core tensor multiplied by a matrix along each mode (dimension). It's more general than CP Decomposition and is useful in areas like data compression and tensor completion.
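In the same notation, Tucker Decomposition factors a third-order tensor as \[ \mathcal{X} \approx \mathcal{G} \times_1 A \times_2 B \times_3 C \] where \( \mathcal{G} \) is the core tensor and \( \times_n \) denotes the mode-\( n \) product with the factor matrix for that mode; CP is the special case of a superdiagonal core.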
**Higher-Order Singular Value Decomposition (HOSVD)**: A generalization of SVD for higher-order tensors, HOSVD decomposes a tensor into a core tensor and a set of orthogonal matrices for each mode. It's used in image processing, computer vision, and multilinear subspace learning.


@@ -1,94 +0,0 @@
#![allow(mixed_script_confusables)]
#![allow(non_snake_case)]
use bytemuck::cast_slice;
use manifold::contract;
use manifold::*;
fn tensor_product() {
    println!("Tensor Product\n");

    // Build the operands directly from values: a 2x2 tensor and a 2-element vector.
    let tensor1 = Tensor::<i32, 2>::from([[1, 2], [3, 4]]); // 2x2 tensor
    let tensor2 = Tensor::<i32, 1>::from([5, 6]); // 2-element vector

    println!("T1: {}", tensor1);
    println!("T2: {}", tensor2);

    let product = tensor1.tensor_product(&tensor2);
    println!("T1 * T2 = {}", product);

    // Check shape of the resulting tensor
    assert_eq!(product.shape(), &Shape::new([2, 2, 2]));

    // Check buffer of the resulting tensor
    let expect: &[i32] =
        cast_slice(&[[[5, 6], [10, 12]], [[15, 18], [20, 24]]]);
    assert_eq!(product.buffer(), expect);
}
fn test_tensor_contraction_23x32() {
    // Define two 2D tensors (matrices)

    // Tensor A is 2x3
    let a: Tensor<i32, 2> = Tensor::from([[1, 2, 3], [4, 5, 6]]);
    println!("a: {:?}\n{}\n", a.shape(), a);

    // Tensor B is 3x2
    let b: Tensor<i32, 2> = Tensor::from([[1, 2], [3, 4], [5, 6]]);
    println!("b: {:?}\n{}\n", b.shape(), b);

    // Contract over the last axis of A (axis 1) and the first axis of B (axis 0)
    let ctr10 = contract((&a, [1]), (&b, [0]));
    println!("[1, 0]: {:?}\n{}\n", ctr10.shape(), ctr10);

    let ctr01 = contract((&a, [0]), (&b, [1]));
    println!("[0, 1]: {:?}\n{}\n", ctr01.shape(), ctr01);

    // Under the standard convention, contracting a 2x3 with a 3x2 over the
    // shared axis of size 3 is the matrix product, a 2x2 result:
    // assert_eq!(ctr10.shape(), &Shape::new([2, 2]));
    // assert_eq!(
    //     ctr10.buffer(),
    //     &[22, 28, 49, 64],
    //     "Contracted tensor buffer does not match expected"
    // );
}
fn test_tensor_contraction_rank3() {
    let a = tensor!([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]);
    let b = tensor!([[[9, 10], [11, 12]], [[13, 14], [15, 16]]]);
    let contracted_tensor = contract((&a, [2]), (&b, [0]));
    println!("a: {}", a);
    println!("b: {}", b);
    println!("contracted_tensor: {}", contracted_tensor);

    // Under the standard convention the result shape is [2, 2, 2, 2] (the free
    // axes of `a` followed by the free axes of `b`):
    // assert_eq!(contracted_tensor.shape(), &Shape::new([2, 2, 2, 2]));
    // Verify specific elements, e.g. C[0][0][0][0] = 1*9 + 2*13 = 35
    // and C[0][0][0][1] = 1*10 + 2*14 = 38.
    // ... further checks for other elements ...
}
fn transpose() {
    let a = Tensor::from([[1, 2, 3], [4, 5, 6]]);
    // let iter = a.idx().iter_transposed([1, 0]);
    // for idx in iter {
    //     println!("{idx}");
    // }

    // Swap the two axes of the 2x3 tensor, yielding a 3x2 tensor.
    let b = a.clone().transpose([1, 0]).unwrap();
    println!("a: {}", a);
    println!("ta: {}", b);
}
fn main() {
    tensor_product();
    test_tensor_contraction_23x32();
    test_tensor_contraction_rank3();
    transpose();
}