feat: thresholded relu #391

Merged · 9 commits · Oct 24, 2023
1 change: 1 addition & 0 deletions docs/SUMMARY.md
@@ -97,6 +97,7 @@
* [nn.softplus](framework/operators/neural-network/nn.softplus.md)
* [nn.linear](framework/operators/neural-network/nn.linear.md)
* [nn.hard\_sigmoid](framework/operators/neural-network/nn.hard\_sigmoid.md)
* [nn.thresholded\_relu](framework/operators/neural-network/nn.thresholded_relu.md)
* [Machine Learning](framework/operators/machine-learning/README.md)
* [Tree Regressor](framework/operators/machine-learning/tree-regressor/README.md)
* [tree.predict](framework/operators/machine-learning/tree-regressor/tree.predict.md)
1 change: 1 addition & 0 deletions docs/framework/compatibility.md
@@ -34,6 +34,7 @@ You can see below the list of current supported ONNX Operators:
| [Flatten](operators/tensor/tensor.flatten.md) | :white\_check\_mark: |
| [Relu](operators/neural-network/nn.relu.md) | :white\_check\_mark: |
| [LeakyRelu](operators/neural-network/nn.leaky\_relu.md) | :white\_check\_mark: |
| [ThresholdedRelu](operators/neural-network/nn.thresholded\_relu.md) | :white\_check\_mark: |
| [Sigmoid](operators/neural-network/nn.sigmoid.md) | :white\_check\_mark: |
| [Softmax](operators/neural-network/nn.softmax.md) | :white\_check\_mark: |
| [LogSoftmax](operators/neural-network/nn.logsoftmax.md) | :white\_check\_mark: |
1 change: 1 addition & 0 deletions docs/framework/operators/neural-network/README.md
@@ -32,4 +32,5 @@ Orion supports currently these `NN` types.
| [`nn.softplus`](nn.softplus.md) | Applies the Softplus function element-wise. |
| [`nn.linear`](nn.linear.md) | Performs a linear transformation of the input tensor using the provided weights and bias. |
| [`nn.hard_sigmoid`](nn.hard\_sigmoid.md) | Applies the Hard Sigmoid function to an n-dimensional input tensor. |
| [`nn.thresholded_relu`](nn.thresholded\_relu.md) | Performs the thresholded ReLU activation function element-wise. |

47 changes: 47 additions & 0 deletions docs/framework/operators/neural-network/nn.thresholded_relu.md
@@ -0,0 +1,47 @@
# NNTrait::thresholded_relu

```rust
fn thresholded_relu(tensor: @Tensor<T>, alpha: @T) -> Tensor<T>
```

Applies the thresholded rectified linear unit (Thresholded ReLU) activation function element-wise to a given tensor.

The Thresholded ReLU function is defined as f(x) = x if x > alpha, f(x) = 0 otherwise, where x is the input element.

## Args
* `tensor`(`@Tensor<T>`) - A snapshot of a tensor to which the Thresholded ReLU function will be applied.
* `alpha`(`@T`) - A snapshot of a fixed point scalar that defines the alpha value of the Thresholded ReLU function.

## Returns
A new fixed point tensor with the same shape as the input tensor and the Thresholded ReLU function applied element-wise.

## Type Constraints

Constrain input and output types to fixed point tensors.

## Examples

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, FP8x23};
use orion::operators::nn::{NNTrait, FP8x23NN};
use orion::numbers::{FP8x23, FixedTrait};

fn thresholded_relu_example() -> Tensor<FP8x23> {
let tensor = TensorTrait::<FP8x23>::new(
shape: array![2, 2].span(),
data: array![
FixedTrait::new(0, false),
FixedTrait::new(256, false),
FixedTrait::new(512, false),
FixedTrait::new(257, false),
]
.span(),
);
let alpha = FixedTrait::from_felt(256); // threshold value

return NNTrait::thresholded_relu(@tensor, @alpha);
}
>>> [[0, 0], [512, 257]]
```
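
For readers cross-checking the documented output, here is a minimal NumPy sketch (not part of this PR) of the same element-wise rule, applied to the raw fixed-point magnitudes used in the example:

```python
import numpy as np

def thresholded_relu(x: np.ndarray, alpha: float) -> np.ndarray:
    """Reference Thresholded ReLU: keep x where x > alpha, zero it elsewhere."""
    return np.where(x > alpha, x, 0.0)

# Raw fixed-point magnitudes from the example above, with a threshold of 256.
x = np.array([[0.0, 256.0], [512.0, 257.0]])
print(thresholded_relu(x, alpha=256.0))
# [[  0.   0.]
#  [512. 257.]]
```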
44 changes: 44 additions & 0 deletions nodegen/node/thresholded_relu.py
@@ -0,0 +1,44 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_node, make_test, to_fp, Tensor, Dtype, FixedImpl, Trait


class Thresholded_relu(RunAll):

@staticmethod
def thresholded_relu_fp8x23():

alpha = 1.0

x = np.random.uniform(-5, 7, (2, 2)).astype(np.float64)
y = np.clip(x, alpha, np.inf)
y[y == alpha] = 0

x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))

name = "thresholded_relu_fp8x23"
make_node([x], [y], name)
make_test([x], y, "NNTrait::thresholded_relu(@input_0, @FixedTrait::new(256, false))",
name, Trait.NN)

@staticmethod
def thresholded_relu_fp16x16():

alpha = 1.0

x = np.random.uniform(-5, 7, (2, 2)).astype(np.float64)
y = np.clip(x, alpha, np.inf)
y[y == alpha] = 0

x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))

name = "thresholded_relu_fp16x16"
make_node([x], [y], name)
make_test([x], y, "NNTrait::thresholded_relu(@input_0, @FixedTrait::new(65536, false))",
name, Trait.NN)
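
A small aside on the reference computation above: clipping to `[alpha, inf)` and then zeroing the entries equal to `alpha` is an indirect way of writing the strict-threshold select. A minimal check, not part of the PR (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)          # arbitrary seed, for reproducibility only
x = rng.uniform(-5, 7, (2, 2)).astype(np.float64)
alpha = 1.0

# Idiom used in the generator: clip, then zero out the clipped entries.
y_clip = np.clip(x, alpha, np.inf)
y_clip[y_clip == alpha] = 0

# Direct formulation of Thresholded ReLU.
y_where = np.where(x > alpha, x, 0.0)

assert np.array_equal(y_clip, y_where)
```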
50 changes: 50 additions & 0 deletions src/operators/nn/core.cairo
@@ -11,6 +11,7 @@ use orion::operators::tensor::core::Tensor;
/// softplus - Applies the Softplus function element-wise.
/// linear - Performs a linear transformation of the input tensor using the provided weights and bias.
/// hard_sigmoid - Applies the Hard Sigmoid function to an n-dimensional input tensor.
/// thresholded_relu - Performs the thresholded ReLU activation function element-wise.
trait NNTrait<T> {
/// # NNTrait::relu
///
@@ -507,4 +508,53 @@ trait NNTrait<T> {
/// ```
///
fn hard_sigmoid(tensor: @Tensor<T>, alpha: @T, beta: @T) -> Tensor<T>;
/// # NNTrait::thresholded_relu
///
/// ```rust
/// fn thresholded_relu(tensor: @Tensor<T>, alpha: @T) -> Tensor<T>
/// ```
///
/// Applies the thresholded rectified linear unit (Thresholded ReLU) activation function element-wise to a given tensor.
///
/// The Thresholded ReLU function is defined as f(x) = x if x > alpha, f(x) = 0 otherwise, where x is the input element.
///
/// ## Args
/// * `tensor`(`@Tensor<T>`) - A snapshot of a tensor to which the Thresholded ReLU function will be applied.
/// * `alpha`(`@T`) - A snapshot of a fixed point scalar that defines the alpha value of the Thresholded ReLU function.
///
/// ## Returns
/// A new fixed point tensor with the same shape as the input tensor and the Thresholded ReLU function applied element-wise.
///
/// ## Type Constraints
///
/// Constrain input and output types to fixed point tensors.
///
/// ## Examples
///
/// ```rust
/// use array::{ArrayTrait, SpanTrait};
///
/// use orion::operators::tensor::{TensorTrait, Tensor, FP8x23};
/// use orion::operators::nn::{NNTrait, FP8x23NN};
/// use orion::numbers::{FP8x23, FixedTrait};
///
/// fn thresholded_relu_example() -> Tensor<FP8x23> {
/// let tensor = TensorTrait::<FP8x23>::new(
/// shape: array![2, 2].span(),
/// data: array![
/// FixedTrait::new(0, false),
/// FixedTrait::new(256, false),
/// FixedTrait::new(512, false),
/// FixedTrait::new(257, false),
/// ]
/// .span(),
/// );
/// let alpha = FixedTrait::from_felt(256); // threshold value
///
/// return NNTrait::thresholded_relu(@tensor, @alpha);
/// }
/// >>> [[0, 0], [512, 257]]
/// ```
///
fn thresholded_relu(tensor: @Tensor<T>, alpha: @T) -> Tensor<T>;
}
1 change: 1 addition & 0 deletions src/operators/nn/functional.cairo
@@ -6,4 +6,5 @@ mod softsign;
mod softplus;
mod linear;
mod logsoftmax;
mod thresholded_relu;
mod hard_sigmoid;
38 changes: 38 additions & 0 deletions src/operators/nn/functional/thresholded_relu.cairo
@@ -0,0 +1,38 @@
use array::ArrayTrait;
use array::SpanTrait;
use option::OptionTrait;

use orion::numbers::NumberTrait;
use orion::operators::tensor::core::{Tensor, TensorTrait};

/// Cf: NNTrait::thresholded_relu docstring
fn thresholded_relu<
T,
MAG,
impl TTensor: TensorTrait<T>,
impl TNumber: NumberTrait<T, MAG>,
impl TPartialOrd: PartialOrd<T>,
impl TCopy: Copy<T>,
impl TDrop: Drop<T>
>(
mut z: Tensor<T>, alpha: @T
) -> Tensor<T> {
let mut data_result = ArrayTrait::<T>::new();

loop {
match z.data.pop_front() {
Option::Some(item) => {
if (*item) <= (*alpha) {
data_result.append(NumberTrait::zero());
} else {
data_result.append(*item);
};
},
Option::None(_) => {
break;
}
};
};

return TensorTrait::new(z.shape, data_result.span());
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp16x16.cairo
@@ -49,6 +49,10 @@ impl FP16x16NN of NNTrait<FP16x16> {
functional::leaky_relu::leaky_relu(*inputs, alpha)
}

fn thresholded_relu(tensor: @Tensor<FP16x16>, alpha: @FP16x16) -> Tensor<FP16x16> {
functional::thresholded_relu::thresholded_relu(*tensor, alpha)
}

fn hard_sigmoid(tensor: @Tensor<FP16x16>, alpha: @FP16x16, beta: @FP16x16) -> Tensor<FP16x16> {
functional::hard_sigmoid::hard_sigmoid(*tensor, alpha, beta)
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp32x32.cairo
@@ -43,6 +43,10 @@ impl FP32x32NN of NNTrait<FP32x32> {
functional::leaky_relu::leaky_relu(*inputs, alpha)
}

fn thresholded_relu(tensor: @Tensor<FP32x32>, alpha: @FP32x32) -> Tensor<FP32x32> {
functional::thresholded_relu::thresholded_relu(*tensor, alpha)
}

fn hard_sigmoid(tensor: @Tensor<FP32x32>, alpha: @FP32x32, beta: @FP32x32) -> Tensor<FP32x32> {
functional::hard_sigmoid::hard_sigmoid(*tensor, alpha, beta)
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_fp64x64.cairo
@@ -43,6 +43,10 @@ impl FP64x64NN of NNTrait<FP64x64> {
functional::leaky_relu::leaky_relu(*inputs, alpha)
}

fn thresholded_relu(tensor: @Tensor<FP64x64>, alpha: @FP64x64) -> Tensor<FP64x64> {
functional::thresholded_relu::thresholded_relu(*tensor, alpha)
}

fn hard_sigmoid(tensor: @Tensor<FP64x64>, alpha: @FP64x64, beta: @FP64x64) -> Tensor<FP64x64> {
functional::hard_sigmoid::hard_sigmoid(*tensor, alpha, beta)
}
6 changes: 5 additions & 1 deletion src/operators/nn/implementations/nn_fp8x23.cairo
@@ -46,7 +46,11 @@ impl FP8x23NN of NNTrait<FP8x23> {
fn leaky_relu(inputs: @Tensor<FP8x23>, alpha: @FP8x23) -> Tensor<FP8x23> {
functional::leaky_relu::leaky_relu(*inputs, alpha)
}


fn thresholded_relu(tensor: @Tensor<FP8x23>, alpha: @FP8x23) -> Tensor<FP8x23> {
functional::thresholded_relu::thresholded_relu(*tensor, alpha)
}

fn hard_sigmoid(tensor: @Tensor<FP8x23>, alpha: @FP8x23, beta: @FP8x23) -> Tensor<FP8x23> {
functional::hard_sigmoid::hard_sigmoid(*tensor, alpha, beta)
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_i32.cairo
@@ -39,6 +39,10 @@ impl I32NN of NNTrait<i32> {
panic(array!['not supported!'])
}

fn thresholded_relu(tensor: @Tensor<i32>, alpha: @i32) -> Tensor<i32> {
panic(array!['not supported!'])
}

fn hard_sigmoid(tensor: @Tensor<i32>, alpha: @i32, beta: @i32) -> Tensor<i32> {
panic(array!['not supported!'])
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_i8.cairo
@@ -39,6 +39,10 @@ impl I8NN of NNTrait<i8> {
panic(array!['not supported!'])
}

fn thresholded_relu(tensor: @Tensor<i8>, alpha: @i8) -> Tensor<i8> {
panic(array!['not supported!'])
}

fn hard_sigmoid(tensor: @Tensor<i8>, alpha: @i8, beta: @i8) -> Tensor<i8> {
panic(array!['not supported!'])
}
4 changes: 4 additions & 0 deletions src/operators/nn/implementations/nn_u32.cairo
@@ -38,6 +38,10 @@ impl U32NN of NNTrait<u32> {
panic(array!['not supported!'])
}

fn thresholded_relu(tensor: @Tensor<u32>, alpha: @u32) -> Tensor<u32> {
panic(array!['not supported!'])
}

fn hard_sigmoid(tensor: @Tensor<u32>, alpha: @u32, beta: @u32) -> Tensor<u32> {
panic(array!['not supported!'])
}
2 changes: 2 additions & 0 deletions tests/src/nodes.cairo
@@ -433,6 +433,8 @@ mod clip_i8_2d;
mod clip_i8_3d;
mod clip_u32_2d;
mod clip_u32_3d;
mod thresholded_relu_fp16x16;
mod thresholded_relu_fp8x23;
mod hard_sigmoid_fp8x23;
mod hard_sigmoid_fp16x16;
mod neg_fp16x16;
20 changes: 20 additions & 0 deletions tests/src/nodes/thresholded_relu_fp16x16.cairo
@@ -0,0 +1,20 @@
mod input_0;
mod output_0;


use orion::operators::nn::NNTrait;
use orion::numbers::FixedTrait;
use orion::operators::nn::FP16x16NN;
use orion::operators::tensor::FP16x16TensorPartialEq;
use orion::utils::assert_eq;

#[test]
#[available_gas(2000000000)]
fn test_thresholded_relu_fp16x16() {
let input_0 = input_0::input_0();
let z = output_0::output_0();

let y = NNTrait::thresholded_relu(@input_0, @FixedTrait::new(65536, false));

assert_eq(y, z);
}
18 changes: 18 additions & 0 deletions tests/src/nodes/thresholded_relu_fp16x16/input_0.cairo
@@ -0,0 +1,18 @@
use array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{TensorTrait, Tensor};
use orion::operators::tensor::FP16x16Tensor;
use orion::numbers::FixedTrait;
use orion::numbers::FP16x16;

fn input_0() -> Tensor<FP16x16> {
let mut shape = ArrayTrait::<usize>::new();
shape.append(2);
shape.append(2);

let mut data = ArrayTrait::new();
data.append(FP16x16 { mag: 240273, sign: true });
data.append(FP16x16 { mag: 61472, sign: true });
data.append(FP16x16 { mag: 255480, sign: false });
data.append(FP16x16 { mag: 300914, sign: false });
TensorTrait::new(shape.span(), data.span())
}
18 changes: 18 additions & 0 deletions tests/src/nodes/thresholded_relu_fp16x16/output_0.cairo
@@ -0,0 +1,18 @@
use array::{ArrayTrait, SpanTrait};
use orion::operators::tensor::{TensorTrait, Tensor};
use orion::operators::tensor::FP16x16Tensor;
use orion::numbers::FixedTrait;
use orion::numbers::FP16x16;

fn output_0() -> Tensor<FP16x16> {
let mut shape = ArrayTrait::<usize>::new();
shape.append(2);
shape.append(2);

let mut data = ArrayTrait::new();
data.append(FP16x16 { mag: 0, sign: false });
data.append(FP16x16 { mag: 0, sign: false });
data.append(FP16x16 { mag: 255480, sign: false });
data.append(FP16x16 { mag: 300914, sign: false });
TensorTrait::new(shape.span(), data.span())
}
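
As a quick sanity check (not part of the PR), decoding these FP16x16 values back to floats — assuming the usual `mag / 2**16` encoding with a separate sign flag — shows why the first two entries are zeroed while the last two pass through unchanged:

```python
SCALE = 2 ** 16                      # FP16x16: 16 fractional bits
inputs = [(240273, True), (61472, True), (255480, False), (300914, False)]
alpha = 65536 / SCALE                # the test's threshold, i.e. 1.0

for mag, sign in inputs:
    x = -mag / SCALE if sign else mag / SCALE
    y = x if x > alpha else 0.0
    print(f"{x:+.4f} -> {y:+.4f}")
# prints: -3.6663 -> +0.0000, -0.9380 -> +0.0000,
#         +3.8983 -> +3.8983, +4.5916 -> +4.5916
```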
20 changes: 20 additions & 0 deletions tests/src/nodes/thresholded_relu_fp8x23.cairo
@@ -0,0 +1,20 @@
mod input_0;
mod output_0;


use orion::operators::nn::NNTrait;
use orion::numbers::FixedTrait;
use orion::operators::nn::FP8x23NN;
use orion::operators::tensor::FP8x23TensorPartialEq;
use orion::utils::assert_eq;

#[test]
#[available_gas(2000000000)]
fn test_thresholded_relu_fp8x23() {
let input_0 = input_0::input_0();
let z = output_0::output_0();

let y = NNTrait::thresholded_relu(@input_0, @FixedTrait::new(256, false));

assert_eq(y, z);
}