Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Updated May 11, 2021 - Python
Half-precision floating point types f16 and bf16 for Rust.
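As a minimal illustration of what bf16 is (plain Python bit manipulation, not the Rust crate's API): bfloat16 is simply the top 16 bits of an IEEE 754 float32, trading significand precision for float32's full exponent range. This sketch truncates rather than rounding to nearest-even, the simplest possible conversion.

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    # bf16 keeps the sign, the full 8-bit exponent, and the top 7
    # significand bits of a float32 -- i.e. its upper 16 bits.
    # (A production converter would round to nearest-even; this truncates.)
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(bits: int) -> float:
    # Widening bf16 -> f32 is exact: just restore the low 16 zero bits.
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# 1.0 survives the round trip; 1.0 + 2**-10 does not, because that
# significand bit falls below bf16's 7 stored fraction bits.
assert bf16_bits_to_f32(f32_to_bf16_bits(1.0)) == 1.0
assert bf16_bits_to_f32(f32_to_bf16_bits(1.0009765625)) == 1.0
```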
Up to 200x Faster Inner Products and Vector Similarity — for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE 📐
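The core idea behind such kernels can be sketched in plain NumPy (this is an illustration, not SimSIMD's API): compute the inner product of f16 vectors while accumulating in f32, which is what the SIMD kernels do to avoid precision loss in a half-precision accumulator.

```python
import numpy as np

# Two small half-precision vectors; every value here is exactly
# representable in binary16, so the result is exact.
a = np.asarray([0.5, 0.25, 0.125], dtype=np.float16)
b = np.asarray([1.0, 2.0, 4.0], dtype=np.float16)

# Upcast before multiplying so the sum is accumulated in float32,
# mirroring how optimized f16 dot-product kernels widen internally.
dot = float(np.dot(a.astype(np.float32), b.astype(np.float32)))
# 0.5*1.0 + 0.25*2.0 + 0.125*4.0 = 1.5
```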
🎯 Accumulated Gradients for TensorFlow 2
Half-float library for C and for the Z80
Stage 3 IEEE 754 half-precision floating-point ponyfill
float16 provides IEEE 754 half-precision format (binary16) with correct conversions to/from float32
C++20 implementation of a 16-bit floating-point type mimicking most of IEEE 754 behavior. Single-file and header-only.
The main purpose of this library is to provide functions for converting to and from half-precision (16-bit) floating-point numbers. It also provides basic arithmetic and comparison of half floats.
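The conversion and arithmetic behavior such libraries provide can be demonstrated with NumPy's built-in binary16 type (an illustrative sketch, not this library's C API): narrowing to half precision rounds to a 10-bit stored significand, while widening back to float32 is exact.

```python
import numpy as np

# float32 -> float16 rounds: 0.1 is not representable in binary16
# and becomes the nearest half-precision value, 0.0999755859375.
h = np.float16(0.1)

# float16 -> float32 widening is exact (every binary16 value is a
# binary32 value), so `back` holds exactly 0.0999755859375.
back = np.float32(h)

# Basic arithmetic and comparison on half floats work as expected;
# 1.5, 2.5, and 4.0 are all exactly representable.
total = np.float16(1.5) + np.float16(2.5)
```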
TFLite applications: optimized .tflite models (lightweight and low-latency) and code to run them directly on your microcontroller!
Cube root of half-precision floating-point epsilon.
Square root of half-precision floating-point epsilon.
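The two constants above derive from binary16 machine epsilon, 2**-10; roots of epsilon like these are commonly used as step sizes for finite-difference derivative approximations. A quick NumPy check (illustrative, not either package's API):

```python
import numpy as np

# binary16 has a 10-bit stored significand, so its machine epsilon
# (the gap between 1.0 and the next representable value) is 2**-10.
eps = float(np.finfo(np.float16).eps)   # 0.0009765625

# Square root of epsilon: 2**-5 = 0.03125.
sqrt_eps = eps ** 0.5

# Cube root of epsilon: 2**(-10/3), approximately 0.0992.
cbrt_eps = eps ** (1.0 / 3.0)
```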
Half-precision 16-bit floating point numbers