Commit
auto-generating sphinx docs
pytorchbot committed Oct 31, 2024
1 parent 10ccc14 commit d613a7e
Showing 4 changed files with 26 additions and 13 deletions.
31 changes: 22 additions & 9 deletions main/_modules/torchtune/models/qwen2/_model_builders.html
@@ -434,13 +434,13 @@ <h1>Source code for torchtune.models.qwen2._model_builders</h1><div class="highl
<span class="c1"># LICENSE file in the root directory of this source tree.</span>
<span class="kn">from</span> <span class="nn">typing</span> <span class="kn">import</span> <span class="n">List</span><span class="p">,</span> <span class="n">Optional</span>

<span class="kn">from</span> <span class="nn">torchtune.models.qwen2._component_builders</span> <span class="kn">import</span> <span class="n">qwen2</span><span class="p">,</span> <span class="n">lora_qwen2</span>
<span class="kn">from</span> <span class="nn">torchtune.models.qwen2._tokenizer</span> <span class="kn">import</span> <span class="n">Qwen2Tokenizer</span>
<span class="kn">from</span> <span class="nn">torchtune.data._prompt_templates</span> <span class="kn">import</span> <span class="n">_get_prompt_template</span><span class="p">,</span> <span class="n">_TemplateType</span>

<span class="kn">from</span> <span class="nn">torchtune.models.qwen2._component_builders</span> <span class="kn">import</span> <span class="n">lora_qwen2</span><span class="p">,</span> <span class="n">qwen2</span>
<span class="kn">from</span> <span class="nn">torchtune.models.qwen2._tokenizer</span> <span class="kn">import</span> <span class="n">QWEN2_SPECIAL_TOKENS</span><span class="p">,</span> <span class="n">Qwen2Tokenizer</span>
<span class="kn">from</span> <span class="nn">torchtune.modules</span> <span class="kn">import</span> <span class="n">TransformerDecoder</span>
<span class="kn">from</span> <span class="nn">torchtune.modules.peft</span> <span class="kn">import</span> <span class="n">LORA_ATTN_MODULES</span>
<span class="kn">from</span> <span class="nn">torchtune.modules.tokenizers</span> <span class="kn">import</span> <span class="n">parse_hf_tokenizer_json</span>
<span class="kn">from</span> <span class="nn">torchtune.data._prompt_templates</span> <span class="kn">import</span> <span class="n">_TemplateType</span>
<span class="kn">from</span> <span class="nn">torchtune.data._prompt_templates</span> <span class="kn">import</span> <span class="n">_get_prompt_template</span>

<span class="sd">&quot;&quot;&quot;</span>
<span class="sd">Model builders build specific instantiations using component builders. For example</span>
@@ -530,7 +530,7 @@ <h1>Source code for torchtune.models.qwen2._model_builders</h1><div class="highl
<span class="n">merges_file</span><span class="p">:</span> <span class="nb">str</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
<span class="n">special_tokens_path</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">str</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
<span class="n">max_seq_len</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="nb">int</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
<span class="n">prompt_template</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="n">_TemplateType</span><span class="p">]</span> <span class="o">=</span> <span class="s2">&quot;torchtune.data.ChatMLTemplate&quot;</span><span class="p">,</span>
<span class="n">prompt_template</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="n">_TemplateType</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span>
<span class="o">**</span><span class="n">kwargs</span><span class="p">,</span>
<span class="p">)</span> <span class="o">-&gt;</span> <span class="n">Qwen2Tokenizer</span><span class="p">:</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
@@ -547,14 +547,27 @@ <h1>Source code for torchtune.models.qwen2._model_builders</h1><div class="highl
<span class="sd"> prompt_template (Optional[_TemplateType]): optional specified prompt template.</span>
<span class="sd"> If a string, it is assumed to be the dotpath of a :class:`~torchtune.data.PromptTemplateInterface`</span>
<span class="sd"> class. If a dictionary, it is assumed to be a custom prompt template mapping role to the</span>
<span class="sd"> prepend/append tags. Default is :class:`~torchtune.models.llama2.Llama2ChatTemplate`.</span>
<span class="sd"> prepend/append tags. Default is None.</span>

<span class="sd"> Returns:</span>
<span class="sd"> Qwen2Tokenizer: Instantiation of the Qwen2 tokenizer</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">special_tokens</span> <span class="o">=</span> <span class="n">parse_hf_tokenizer_json</span><span class="p">(</span><span class="n">special_tokens_path</span><span class="p">)</span> <span class="k">if</span> <span class="n">special_tokens_path</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span> <span class="k">else</span> <span class="kc">None</span>
<span class="n">template</span> <span class="o">=</span> <span class="n">_get_prompt_template</span><span class="p">(</span><span class="n">prompt_template</span><span class="p">)</span> <span class="k">if</span> <span class="n">prompt_template</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span> <span class="k">else</span> <span class="kc">None</span>
<span class="k">return</span> <span class="n">Qwen2Tokenizer</span><span class="p">(</span><span class="n">path</span><span class="o">=</span><span class="n">path</span><span class="p">,</span> <span class="n">merges_file</span><span class="o">=</span><span class="n">merges_file</span><span class="p">,</span> <span class="n">special_tokens</span><span class="o">=</span><span class="n">special_tokens</span><span class="p">,</span> <span class="n">max_seq_len</span><span class="o">=</span><span class="n">max_seq_len</span><span class="p">,</span> <span class="n">prompt_template</span><span class="o">=</span><span class="n">template</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<span class="n">special_tokens</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">parse_hf_tokenizer_json</span><span class="p">(</span><span class="n">special_tokens_path</span><span class="p">)</span>
<span class="k">if</span> <span class="n">special_tokens_path</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span>
<span class="k">else</span> <span class="n">QWEN2_SPECIAL_TOKENS</span>
<span class="p">)</span>
<span class="n">template</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">_get_prompt_template</span><span class="p">(</span><span class="n">prompt_template</span><span class="p">)</span> <span class="k">if</span> <span class="n">prompt_template</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span> <span class="k">else</span> <span class="kc">None</span>
<span class="p">)</span>
<span class="k">return</span> <span class="n">Qwen2Tokenizer</span><span class="p">(</span>
<span class="n">path</span><span class="o">=</span><span class="n">path</span><span class="p">,</span>
<span class="n">merges_file</span><span class="o">=</span><span class="n">merges_file</span><span class="p">,</span>
<span class="n">special_tokens</span><span class="o">=</span><span class="n">special_tokens</span><span class="p">,</span>
<span class="n">max_seq_len</span><span class="o">=</span><span class="n">max_seq_len</span><span class="p">,</span>
<span class="n">prompt_template</span><span class="o">=</span><span class="n">template</span><span class="p">,</span>
<span class="o">**</span><span class="n">kwargs</span><span class="p">,</span>
<span class="p">)</span></div>


<div class="viewcode-block" id="lora_qwen2_7b"><a class="viewcode-back" href="../../../../generated/torchtune.models.qwen2.lora_qwen2_7b.html#torchtune.models.qwen2.lora_qwen2_7b">[docs]</a><span class="k">def</span> <span class="nf">lora_qwen2_7b</span><span class="p">(</span>
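The hunk above changes two defaults in the `qwen2_tokenizer` builder: `prompt_template` now defaults to `None` instead of `"torchtune.data.ChatMLTemplate"`, and a missing `special_tokens_path` now falls back to the built-in `QWEN2_SPECIAL_TOKENS` rather than `None`. A minimal sketch of the resulting fallback logic; `QWEN2_SPECIAL_TOKENS` and `parse_hf_tokenizer_json` are names taken from the diff, but the values and the parser body below are simplified stand-ins, not torchtune's actual implementation:

```python
# Illustrative token ids only; the real mapping lives in
# torchtune.models.qwen2._tokenizer.QWEN2_SPECIAL_TOKENS.
QWEN2_SPECIAL_TOKENS = {"<|im_start|>": 0, "<|im_end|>": 1}


def parse_hf_tokenizer_json(path):
    """Stand-in for torchtune.modules.tokenizers.parse_hf_tokenizer_json,
    which reads special tokens from a Hugging Face tokenizer.json file."""
    return {"<|custom|>": 2}


def resolve_special_tokens(special_tokens_path=None):
    # Before this commit the builder returned None when no path was given;
    # afterwards it falls back to the built-in QWEN2_SPECIAL_TOKENS.
    return (
        parse_hf_tokenizer_json(special_tokens_path)
        if special_tokens_path is not None
        else QWEN2_SPECIAL_TOKENS
    )
```

With no path the builder now always has a usable special-tokens mapping, which matches the docstring edit in the same hunk (`Default is None` for the prompt template, built-in tokens otherwise).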
2 changes: 1 addition & 1 deletion main/generated/torchtune.data.ChatMLTemplate.html
@@ -437,7 +437,7 @@
<h1>ChatMLTemplate<a class="headerlink" href="#chatmltemplate" title="Permalink to this heading"></a></h1>
<dl class="py class">
<dt class="sig sig-object py" id="torchtune.data.ChatMLTemplate">
<em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">torchtune.data.</span></span><span class="sig-name descname"><span class="pre">ChatMLTemplate</span></span><a class="reference internal" href="../_modules/torchtune/data/_prompt_templates.html#ChatMLTemplate"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torchtune.data.ChatMLTemplate" title="Permalink to this definition"></a></dt>
<em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">torchtune.data.</span></span><span class="sig-name descname"><span class="pre">ChatMLTemplate</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="o"><span class="pre">*</span></span><span class="n"><span class="pre">args</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torchtune/data/_prompt_templates.html#ChatMLTemplate"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torchtune.data.ChatMLTemplate" title="Permalink to this definition"></a></dt>
<dd><p>OpenAI’s <a class="reference external" href="https://github.com/MicrosoftDocs/azure-docs/blob/772c14eeabfa0c0c561d5c2d34ef19341f528b7b/articles/ai-services/openai/how-to/chat-markup-language.md">Chat Markup Language</a>
used by their chat models.</p>
<p>It is the default chat template used by Hugging Face models.</p>
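The second file updates the `ChatMLTemplate` signature to show explicit `*args, **kwargs`. For context, a minimal sketch of the ChatML wire format the template produces (OpenAI's Chat Markup Language, linked in the doc page); `format_chatml` is a hypothetical helper for illustration, not a torchtune API:

```python
def format_chatml(messages):
    # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> tags,
    # the structure ChatMLTemplate applies to a list of chat messages.
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )


print(format_chatml([{"role": "user", "content": "hi"}]))
```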
