🌄 Graia

An experimental neural network library.

Goals

  • Not using the backpropagation algorithm for training. Backpropagation works pretty well, but the main goal of this project is to find a training algorithm that uses only the information locally available to the nodes. Current status: not working 😅, as shown by the inconclusive learning curves figure. The current algorithm is based on the simple Hebbian learning rule "Nodes that fire together, wire together", meaning that the weight between a sending and a receiving node increases if the two nodes are active at the same time (see the first sketch below this list).
  • Using bit shifting instead of multiplication for faster computations. A kind of generalization of BitNet b1.58 (see the second sketch below this list).
  • Written in Futhark to leverage OpenCL for GPU acceleration.
  • Python API similar to TensorFlow.
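
To illustrate the Hebbian rule from the first goal, here is a minimal NumPy sketch of a "fire together, wire together" weight update. It only illustrates the principle; it is not Graia's actual training code.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Toy Hebbian update: strengthen weights[i, j] when sending node i
    and receiving node j are active at the same time.
    `pre` and `post` are binary activity vectors (1 = firing, 0 = silent)."""
    co_activity = np.outer(pre, post)  # 1 exactly where both nodes fire together
    return weights + lr * co_activity

# Tiny example: 3 sending nodes, 2 receiving nodes.
w = np.zeros((3, 2))
pre = np.array([1, 0, 1])   # sending nodes 0 and 2 fire
post = np.array([0, 1])     # receiving node 1 fires
w = hebbian_update(w, pre, post)
print(w)  # only w[0, 1] and w[2, 1] have increased
```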
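
For the second goal, restricting weights to signed powers of two lets a multiplication x * 2**s be replaced by a left shift x << s. The snippet below is a plain-Python illustration of that idea; the names and the sign encoding are assumptions for the example, not Graia's actual Futhark kernel.

```python
def shift_weighted_sum(inputs, shifts, signs):
    """Weighted sum where each weight is a signed power of two,
    so each multiplication reduces to a bit shift."""
    total = 0
    for x, s, sign in zip(inputs, shifts, signs):
        total += sign * (x << s)  # same value as sign * x * 2**s, but cheaper
    return total

inputs = [3, 5, 2]       # integer activations
shifts = [1, 0, 3]       # weights 2, 1, 8 encoded as shift amounts
signs  = [+1, -1, +1]    # signs, in the spirit of BitNet b1.58's ternary weights
print(shift_weighted_sum(inputs, shifts, signs))  # 3*2 - 5*1 + 2*8 = 17
```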

Online Demo

A painfully slow (CPU-only, no GPU) online demo of the Python build is available on HuggingFace.

For Graia to be usable, you’ll have to run it locally with a GPU as described below.

Prerequisites

To build Graia on a Debian/Ubuntu system, you need:

  • Futhark
  • Futhark FFI: pip install futhark-ffi
  • OpenCL
    • Native GPU drivers are preferred, but if no OpenCL device is listed by clinfo -l you can install pocl-opencl-icd (slower and sometimes buggy).
    • If you get a missing CL/cl.h error, install opencl-headers.
    • If you get a missing -lOpenCL error, create an OpenCL symlink: sudo ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1 /usr/lib/libOpenCL.so.

Then simply run make to test the Futhark files and compile them into the OpenCL library used by the Graia Python class.
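
Once the build succeeds, a library compiled with futhark-ffi is typically loaded from Python along the following lines. The module name _graia and the entry point name are assumptions for illustration; the actual names depend on the Makefile and the Futhark sources.

```python
import numpy as np
from futhark_ffi import Futhark

import _graia  # hypothetical name of the C module generated by build_futhark_ffi

graia = Futhark(_graia)

# Call a hypothetical Futhark entry point and convert the opaque result back to NumPy.
result = graia.some_entry_point(np.arange(10, dtype=np.int32))
print(graia.from_futhark(result))
```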

Jupyter Notebooks

To make sure you have all the required Python packages, Anaconda is highly recommended.

  • Install Futhark FFI: pip install futhark-ffi.
  • As mentioned above, native GPU drivers are preferred, but if they don’t work you can install PoCL: conda install conda-forge::pocl.

TODO
