Small grammar and rewords
gagichce authored May 16, 2024
1 parent b544a95 commit a295656
1 changed file: zklayer-whitepaper.tex (8 additions, 8 deletions)
@@ -216,7 +216,7 @@ \section{ZKLayer}

\subsection{Technical Architecture}

- To address current blockchain limitations and challenges of running on-chain Neural Networks, ZKLayer is designed to serve as a conduit between off-chain and on-chain architectures. Taking a forward-looking approach, the ZKLayer architecture is inherently modular, a design philosophy that allows each component of the system to be individually updated or replaced.
+ To address current blockchain limitations and challenges of running on-chain Neural Networks, ZKLayer is designed to serve as a conduit between off-chain and on-chain architectures. Taking a forward-looking approach, the ZKLayer architecture is inherently modular, a design philosophy for each component of the system to be individually updated or replaced.

\begin{figure}[!ht]
\centering
@@ -337,7 +337,7 @@ \subsection{Persistent Storage}

\subsection{Aggregation Circuits}

- As the complexity of a model increases, so does the size of its associated zk-circuit which results in larger proofs. To manage this, aggregation circuits are utilized to amalgamate multiple proofs into a singular, more concise proof that can be submitted on-chain, along with the corresponding output data. This technique also permits the batching of related inferences, enhancing efficiency and reducing the on-chain data storage footprint.
+ As the complexity of a model increases, so does the size of its associated zk-circuit which results in larger proofs. To manage this, aggregation circuits are utilized to amalgamate multiple proofs into a singular concise proof submitted on-chain, along with the corresponding output data. This technique also permits the batching of related inferences, enhancing efficiency and reducing the on-chain data storage footprint.
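The pairwise folding behind this batching can be sketched in toy form. Plain SHA-256 hashing stands in for real recursive SNARK aggregation, and `make_proof`/`aggregate` are hypothetical names rather than ZKLayer's actual interface; the sketch only illustrates how many per-inference proofs collapse into one digest suitable for a single on-chain submission:

```python
import hashlib

def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def make_proof(output: bytes, model_key: bytes) -> bytes:
    # Stand-in for a real zk prover: binds an inference output to a model key.
    return _h(b"proof", model_key, output)

def aggregate(proofs: list[bytes]) -> bytes:
    # Fold a batch of proofs into one fixed-size digest, layer by layer
    # (Merkle-style), mirroring how aggregation circuits fold proofs pairwise.
    layer = proofs[:]
    while len(layer) > 1:
        if len(layer) % 2:           # duplicate the last proof on odd layers
            layer.append(layer[-1])
        layer = [_h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]
```

However many inferences are batched, the on-chain payload stays one 32-byte digest plus the output data, which is the storage saving the passage describes.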

\subsection{On-chain Architecture}

@@ -389,9 +389,9 @@ \subsection{Model Vetting}

As ZKLayer will be an open, permissionless network, no party (not even Inference Labs) can decide which models should or shouldn’t be available on the network. Instead, an economic system determines how “good” a model is. This is crucial to retaining an open, fair, and censorship-free network.

- Verified backtesting is published by the model creator and made available to the public. Users get a guarantee the model will perform a certain way under set circumstances rather than relying on blind trust in published accuracy, precision and recall values. While the provided examples may not be representative of real-world use cases as it is self-published by the creator, this can be seen as a move in the right direction. Users also submit inferences one at a time, with no upfront commitments or complicated setup to quickly verify the usefulness of the model for their application.
+ Verified backtesting is published by the model creator and made available to the public. Users get a guarantee the model will perform a certain way under set circumstances rather than relying on blind trust in published accuracy, precision and recall values. While the provided examples may not be representative of real-world use cases as it is self-published by the creator, this is clearly a move in the right direction. Users also submit inferences one at a time, with no upfront commitments or complicated setup to quickly verify the usefulness of the model for their application.

- Aggregating on-chain historical usage of a model results in a proof of its usefulness. How “good” a model is can be answered by its frequency of use, inference by a diverse set of applications and users, and repeat use of a model by a user. In the same way an open-source software package can be evaluated by the number of other projects that depend on it (and subsequently how “good” those packages are).
+ Aggregating on-chain historical usage of a model results in a proof of its usefulness. How “good” a model is can be answered by its frequency of use, inference by a diverse set of applications and users, and repeat use of a model by a user. In the same way an open-source software package can be evaluated by the number of other projects which depend on it (and subsequently how “good” those packages are).
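The signals named here (frequency of use, diversity of users and applications, repeat use) can be sketched as a small scoring function. This is a hypothetical illustration only; the record shape and field names are assumptions, not ZKLayer's actual metric:

```python
from collections import Counter

def usage_score(records):
    """records: list of (user, app) pairs drawn from one model's on-chain history."""
    users = Counter(user for user, _ in records)
    apps = {app for _, app in records}
    return {
        "frequency": len(records),                              # total inferences
        "distinct_users": len(users),                           # user diversity
        "distinct_apps": len(apps),                             # application diversity
        "repeat_users": sum(1 for c in users.values() if c > 1) # repeat usage
    }
```

How these raw counts are weighted into a single “goodness” ranking is left open, matching the whitepaper's framing of an emergent, dependency-graph-like evaluation.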

The network implements a non-zero registration fee for models to prevent flooding of the network with unusable or non-existent models.

@@ -586,7 +586,7 @@ \section{Current Execution and Deployment}
\end{enumerate}


- \section{Remained Challenges and future work}
+ \section{Remaining Challenges and future work}

This section first discusses remaining concerns such as IP risk and data privacy, then it explores future enhancements for ZKLayer. We anticipate ZKLayer's potential to support emerging technologies like FHEML, verifiable FHE, MPCML, and others. While these technologies currently pose computational challenges and are not yet practical, advancements in technology suggest that efficient solutions will become available over time. Consequently, ZKLayer remains adaptable for updates to accommodate new technologies and services.

@@ -600,7 +600,7 @@ \subsubsection{Reverse Engineering Risk}

\textbf{* IP Replication Risk}

- While not a perfect analogy, one must not assume the process of zk-circuit generation to proving and verification keys is strictly one way as it would be with any secure hash function. There may be artifacts in the keys that give hints about the original circuit design and therefore the underlying model used to generate the circuits~\cite{BenSasson2014SuccinctNZ}. While it is likely computationally impractical to reverse this process and regenerate the original model from the keys, the risk still theoretically exists.
+ While not a perfect analogy, one must not assume the process of zk-circuit generation to proving and verification keys is strictly one way as it would be with any secure hash function. There may be artifacts in the keys which give hints about the original circuit design and therefore the underlying model used to generate the circuits~\cite{BenSasson2014SuccinctNZ}. While it is likely computationally impractical to reverse this process and regenerate the original model from the keys, the risk still theoretically exists.

\textbf{* Public Verification Key}

@@ -610,7 +610,7 @@ \subsubsection{Security Risk}

\textbf{* Trusted Setup Risk}

- There are a few methods to mitigate this which are in early development. Recently ahead of the Unirep v2 launch, a call to the public was made to assist in a public trusted setup generation process (which Inference Labs proudly participated in) and the tools are open source to repeat this process. This process can be replicated at scale and at the protocol level, which allows nodes on the network to contribute to the process as new models are registered on the network and provide incentives for nodes to participate. This also further increases the security of the setup process and the overall network.
+ There are a few methods to mitigate this which are in early development. Recently ahead of the Unirep v2 launch, a call to the public was made to assist in a public trusted setup generation process (which Inference Labs proudly participated in) and the tools are open source to repeat this process. This process can be replicated at scale and at the protocol level. When new models are registered, nodes contribute to the process and are incentivized for their participation. This also further increases the security of the setup process and the overall network.
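The shape of such a ceremony can be sketched as a toy contribution chain. SHA-256 stands in for the real cryptographic accumulation (e.g. a powers-of-tau update), and the function names are hypothetical; the property being illustrated is that the final setup value depends on every participant's secret, so the setup remains sound as long as at least one contributor discards theirs:

```python
import hashlib

def contribute(accumulator: bytes, secret: bytes) -> tuple[bytes, bytes]:
    # One participant mixes private randomness into the running setup value
    # and publishes an attestation so the transcript can be audited.
    new_acc = hashlib.sha256(accumulator + secret).digest()
    attestation = hashlib.sha256(b"attest" + new_acc).digest()
    return new_acc, attestation

def run_ceremony(seed: bytes, secrets: list[bytes]) -> tuple[bytes, list[bytes]]:
    # Sequentially apply each node's contribution, recording a public transcript.
    acc, transcript = seed, []
    for s in secrets:
        acc, att = contribute(acc, s)
        transcript.append(att)
    return acc, transcript
```

Running this per registered model, as the passage suggests, would let each new model's keys inherit fresh contributions from whichever nodes participate at registration time.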


\textbf{* Age of ZK}
@@ -674,7 +674,7 @@ \subsection{Potential Usages of ZKLayer}

To address this issue, companies should demonstrate the fairness of their AI models. A naïve solution would be for these companies to publicly disclose their algorithms. However, this approach conflicts with their intellectual property rights. Therefore, ZKML proposes a solution whereby companies can prove that they are using a specific algorithm for all users without revealing any information about their models. Kang et al.~\cite{TensorPlonkMedium} have provided insights into the ZKML system, which operates using GPU acceleration (GPA). The use of GPA can accelerate the proof generation process by over 1000 times. Consequently, they suggest that Twitter could generate proofs for 1\% of the 500 million tweets per day from its users for approximately \$21,000 per day. Given that this cost represents less than 0.5\% of Twitter's annual infrastructure expenses, it is feasible for Twitter to demonstrate the fairness of its feed AI models.
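The quoted figures imply a per-proof cost that is easy to check:

```python
# Figures from the passage above: 1% of 500M daily tweets, ~$21,000/day.
tweets_per_day = 500_000_000
share_proved = 0.01
daily_cost_usd = 21_000

proofs_per_day = int(tweets_per_day * share_proved)      # 5,000,000 proofs/day
cost_per_proof_usd = daily_cost_usd / proofs_per_day     # $0.0042 per proof
```

At well under half a cent per proof, the claim that this is a small fraction of a large platform's infrastructure budget is plausible on its face.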

- In the future, as AI models increasingly handle decision-making and various tasks, responsible AI will become even more critical than it is today. Simultaneously, with a shift towards decentralization, most communications and transactions are expected to occur on Web3. In such an environment, ZKLayer could play a vital role by enabling AI model operators to broadcast proofs of honesty on Web3 without compromising the confidentiality of their model details.
+ In the future, as AI models increasingly handle decision-making and various tasks, responsible AI will become even more critical than it is today. Simultaneously, with a shift towards decentralization, most communications and transactions are expected to occur on Web3. In such an environment, ZKLayer will play a vital role by enabling AI model operators to broadcast proofs of honesty on Web3 without compromising the confidentiality of their model details.

\section{Conclusion}
The Zero-Knowledge Layer (ZKLayer) presents a comprehensive solution to the challenges of integrating AI and blockchain technology. It provides a decentralized protocol that enables secure, off-chain AI model inferences while preserving intellectual property through zero-knowledge cryptography. This innovative approach not only enhances privacy and security but also ensures the integrity and authenticity of AI models. The ZKLayer architecture is designed to be modular and adaptable, supporting rapid deployment across multiple blockchain ecosystems. This work reflects a significant step towards realizing a decentralized, secure, and privacy-preserving foundation for AI-enhanced blockchain systems, potentially revolutionizing the way AI operates in the blockchain space and contributing to the broader adoption of web3 technologies.
