-
How would a non-coder go about getting this developed? I have read all the documentation, watched many of your videos, and studied all the relevant material, but I do not have the capability to build this for myself.
-
This might sound a little crazy, but if you have Open Interpreter, or just access to the normal Code Interpreter, you can model these things inside ChatGPT itself. As a by-product it will also write the code for the functions for you, and potentially in the future, if computation is increased, it could run it inside of itself; I think that's Gödel's or Turing's observation.
-
Here is a basic model to help you understand it. Put this in the system prompt: *You are a "brain". You decide, based on the user's input, whether to store that input in hot memory or cold memory, and when to access that memory again for in-context insertion of previous data into the current input. Remember to time and date stamp all information for ease of access.*
Then, if you are game, get it to create an in-memory vector database holding the segments of cold memory, plus a scratch pad to record hot memory; ask it to update both at every turn, and when you are finished you can ask it to give you the saved files (a rough sketch follows below). This is very basic, but it will help you understand what is happening asynchronously in the ACE framework, so you can begin to understand the architecture of the ACE framework and how you can implement it in multiple layers. Hope that helps!
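A minimal sketch of that setup in Python, assuming a toy bag-of-words similarity in place of a real embedding model; the names here (`ColdMemory`, `HotScratchpad`, `embed`, `recall`) are illustrative and not part of the ACE framework itself:

```python
import math
from collections import Counter
from datetime import datetime, timezone


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' so the sketch stays self-contained.
    A real build would call an embedding model here instead."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


class ColdMemory:
    """In-memory 'vector database' of older, time-stamped entries."""

    def __init__(self):
        self.entries = []  # list of (timestamp, text, vector)

    def store(self, text: str):
        stamp = datetime.now(timezone.utc).isoformat()
        self.entries.append((stamp, text, embed(text)))

    def recall(self, query: str, k: int = 3):
        """Return the k entries most similar to the query, with timestamps."""
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[2]), reverse=True)
        return [(stamp, text) for stamp, text, _ in ranked[:k]]


class HotScratchpad:
    """Short, always-in-context scratch pad of the freshest notes."""

    def __init__(self, max_items: int = 5):
        self.max_items = max_items
        self.notes = []

    def store(self, text: str):
        stamp = datetime.now(timezone.utc).isoformat()
        self.notes.append((stamp, text))
        self.notes = self.notes[-self.max_items:]  # drop the oldest notes


if __name__ == "__main__":
    cold, hot = ColdMemory(), HotScratchpad()
    cold.store("User prefers short answers.")
    cold.store("Project is built on the ACE framework with layered agents.")
    hot.store("Currently wiring up the memory layer.")
    print(cold.recall("which framework is the project built on?", k=1))
    print(hot.notes)
```

In a real build the "brain" decision of whether an input goes to hot or cold memory would come from the model itself on each turn, as the system prompt above describes; this sketch only shows the two stores and the time-stamping.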
-
Forgot to mention: you want it to read and write every turn, so it can check its memory during the conversation.
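A rough sketch of that read-then-write loop, assuming a hypothetical `call_model` placeholder for whatever chat-completion call you actually use, and a plain time-stamped list standing in for the fuller hot/cold split above:

```python
from datetime import datetime, timezone

# Plain time-stamped log; in a fuller build this would be the hot/cold stores above.
memory = []


def call_model(prompt: str) -> str:
    """Hypothetical placeholder for a real chat-completion call."""
    return f"(model reply to: {prompt[-60:]})"


def turn(user_input: str) -> str:
    # READ: pull recent memory into the prompt before answering.
    context = "\n".join(f"[{stamp}] {note}" for stamp, note in memory[-5:])
    prompt = f"Relevant memory:\n{context}\n\nUser: {user_input}"
    reply = call_model(prompt)
    # WRITE: record this turn so the next one can check it.
    stamp = datetime.now(timezone.utc).isoformat()
    memory.append((stamp, f"user: {user_input} | assistant: {reply}"))
    return reply


print(turn("Remind me which framework we're using."))
print(turn("And what were we talking about last turn?"))
```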
-
https://arxiv.org/abs/2309.09298
OWL: A Large Language Model for IT Operations
With the rapid development of IT operations, it has become increasingly crucial to efficiently manage and analyze large volumes of data for practical applications. The techniques of Natural Language Processing (NLP) have shown remarkable capabilities for various tasks, including named entity recognition, machine translation and dialogue systems. Recently, Large Language Models (LLMs) have achieved significant improvements across various NLP downstream tasks. However, there is a lack of specialized LLMs for IT operations. In this paper, we introduce the OWL, a large language model trained on our collected OWL-Instruct dataset with a wide range of IT-related information, where the mixture-of-adapter strategy is proposed to improve the parameter-efficient tuning across different domains or tasks. Furthermore, we evaluate the performance of our OWL on the OWL-Bench established by us and open IT-related benchmarks. OWL demonstrates superior performance results on IT tasks, which outperforms existing models by significant margins. Moreover, we hope that the findings of our work will provide more insights to revolutionize the techniques of IT operations with specialized LLMs.