# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

import warnings
from dataclasses import dataclass
from typing import List, Optional, Tuple

import tensorflow as tf

from .utils import ModelOutput


@dataclass
class TFBaseModelOutput(ModelOutput):
    """
    Base class for model's outputs, with potential hidden states and attentions.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    last_hidden_state: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


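# Illustrative sketch (an assumption for this edit, not part of the library):
# building a TFBaseModelOutput by hand to show the attribute/key/index duality
# that every ModelOutput subclass in this file inherits. All shapes are invented.
def _demo_tf_base_model_output() -> TFBaseModelOutput:
    hidden = tf.zeros((2, 5, 8))  # (batch_size, sequence_length, hidden_size)
    out = TFBaseModelOutput(last_hidden_state=hidden)
    # Attribute, string-key and integer-index access all reach the same tensor.
    assert out.last_hidden_state is out["last_hidden_state"]
    assert out[0] is hidden
    # Fields left as None (hidden_states, attentions) are dropped by to_tuple().
    assert len(out.to_tuple()) == 1
    return out

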
@dataclass
class TFBaseModelOutputWithNoAttention(ModelOutput):
    """
    Base class for model's outputs, with potential hidden states.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`):
            Sequence of hidden-states at the output of the last layer of the model.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each layer) of shape `(batch_size, num_channels, height, width)`.

            Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
    """

    last_hidden_state: tf.Tensor = None
    hidden_states: Optional[Tuple[tf.Tensor, ...]] = None


@dataclass
class TFBaseModelOutputWithPooling(ModelOutput):
    """
    Base class for model's outputs that also contains a pooling of the last hidden states.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`):
            Last layer hidden-state of the first token of the sequence (classification token) further processed by a
            Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
            prediction (classification) objective during pretraining.

            This output is usually *not* a good summary of the semantic content of the input, you're often better off
            averaging or pooling the sequence of hidden-states for the whole input sequence.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    last_hidden_state: tf.Tensor = None
    pooler_output: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFBaseModelOutputWithPoolingAndNoAttention(ModelOutput):
    """
    Base class for model's outputs that also contains a pooling of the last hidden states.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`):
            Sequence of hidden-states at the output of the last layer of the model.
        pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`):
            Last layer hidden-state after a pooling operation on the spatial dimensions.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each layer) of shape `(batch_size, num_channels, height, width)`.

            Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
    """

    last_hidden_state: tf.Tensor = None
    pooler_output: tf.Tensor = None
    hidden_states: Optional[Tuple[tf.Tensor, ...]] = None


@dataclass
class TFBaseModelOutputWithPoolingAndCrossAttentions(ModelOutput):
    """
    Base class for model's outputs that also contains a pooling of the last hidden states.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        pooler_output (`tf.Tensor` of shape `(batch_size, hidden_size)`):
            Last layer hidden-state of the first token of the sequence (classification token) further processed by a
            Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
            prediction (classification) objective during pretraining.

            This output is usually *not* a good summary of the semantic content of the input, you're often better off
            averaging or pooling the sequence of hidden-states for the whole input sequence.
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
            `past_key_values` input) to speed up sequential decoding.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
    """

    last_hidden_state: tf.Tensor = None
    pooler_output: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFBaseModelOutputWithPast(ModelOutput):
    """
    Base class for model's outputs that may also contain a past key/values (to speed up sequential decoding).

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.

            If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
            hidden_size)` is output.
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
            `past_key_values` input) to speed up sequential decoding.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    last_hidden_state: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


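# Hedged sketch of the cache layout documented above: one tensor per layer with
# keys and values stacked along the leading axis of size 2. Every size below
# (layers, heads, head dim, ...) is invented for illustration only.
def _demo_past_key_values(num_layers: int = 2) -> TFBaseModelOutputWithPast:
    batch_size, num_heads, seq_len, head_dim = 1, 4, 7, 16
    cache = [
        tf.zeros((2, batch_size, num_heads, seq_len, head_dim)) for _ in range(num_layers)
    ]
    # As the docstring notes, with a cache present models emit only the newest
    # position's hidden state, hence the sequence dimension of 1 here.
    return TFBaseModelOutputWithPast(
        last_hidden_state=tf.zeros((batch_size, 1, num_heads * head_dim)),
        past_key_values=cache,
    )

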
@dataclass
class TFBaseModelOutputWithCrossAttentions(ModelOutput):
    """
    Base class for model's outputs, with potential hidden states and attentions.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
    """

    last_hidden_state: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFBaseModelOutputWithPastAndCrossAttentions(ModelOutput):
    """
    Base class for model's outputs that may also contain a past key/values (to speed up sequential decoding).

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the model.

            If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
            hidden_size)` is output.
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
            `past_key_values` input) to speed up sequential decoding.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
    """

    last_hidden_state: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFSeq2SeqModelOutput(ModelOutput):
    """
    Base class for model encoder's outputs that also contains pre-computed hidden states that can speed up sequential
    decoding.

    Args:
        last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`):
            Sequence of hidden-states at the output of the last layer of the decoder of the model.

            If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
            hidden_size)` is output.
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
            used (see `past_key_values` input) to speed up sequential decoding.
        decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
        decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
        encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder of the model.
        encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
        encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
    """

    last_hidden_state: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    decoder_hidden_states: Tuple[tf.Tensor] | None = None
    decoder_attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None
    encoder_last_hidden_state: tf.Tensor | None = None
    encoder_hidden_states: Tuple[tf.Tensor] | None = None
    encoder_attentions: Tuple[tf.Tensor] | None = None


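# Illustrative sketch (not part of the library): a hand-built seq2seq output,
# showing that encoder- and decoder-side tensors live side by side in one
# ModelOutput. All shapes are invented; source and target lengths may differ.
def _demo_tf_seq2seq_model_output() -> TFSeq2SeqModelOutput:
    out = TFSeq2SeqModelOutput(
        last_hidden_state=tf.zeros((2, 4, 8)),  # decoder side: (batch, tgt_len, hidden)
        encoder_last_hidden_state=tf.zeros((2, 9, 8)),  # encoder side: (batch, src_len, hidden)
    )
    # Optional fields left as None disappear from the tuple view, so only the
    # two tensors set above survive the conversion.
    assert len(out.to_tuple()) == 2
    return out

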
@dataclass
class TFCausalLMOutput(ModelOutput):
    """
    Base class for causal language model (or autoregressive) outputs.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
            Language modeling loss (for next-token prediction).
        logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFCausalLMOutputWithPast(ModelOutput):
    """
    Base class for causal language model (or autoregressive) outputs.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
            Language modeling loss (for next-token prediction).
        logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
            `past_key_values` input) to speed up sequential decoding.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


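# Hedged sketch (not part of the library): greedy next-token selection from a
# causal LM output. Only the last position's logits matter for generation; the
# batch, sequence and vocabulary sizes below are invented.
def _demo_next_token() -> tf.Tensor:
    vocab_size = 11  # invented for the demo
    out = TFCausalLMOutputWithPast(logits=tf.random.uniform((1, 5, vocab_size)))
    # Slice out the final position, then take the highest-scoring token id.
    return tf.argmax(out.logits[:, -1, :], axis=-1)

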
@dataclass
class TFCausalLMOutputWithCrossAttentions(ModelOutput):
    """
    Base class for causal language model (or autoregressive) outputs.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
            Language modeling loss (for next-token prediction).
        logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
            `past_key_values` input) to speed up sequential decoding.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFMaskedLMOutput(ModelOutput):
    """
    Base class for masked language model outputs.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
            Masked language modeling (MLM) loss.
        logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFSeq2SeqLMOutput(ModelOutput):
    """
    Base class for sequence-to-sequence language model outputs.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `labels` is provided):
            Language modeling loss.
        logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
            used (see `past_key_values` input) to speed up sequential decoding.
        decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
        decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
        encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder of the model.
        encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
        encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    decoder_hidden_states: Tuple[tf.Tensor] | None = None
    decoder_attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None
    encoder_last_hidden_state: tf.Tensor | None = None
    encoder_hidden_states: Tuple[tf.Tensor] | None = None
    encoder_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFNextSentencePredictorOutput(ModelOutput):
    """
    Base class for outputs of models predicting if two sentences are consecutive or not.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of non-masked labels, returned when `next_sentence_label` is provided):
            Next sentence prediction loss.
        logits (`tf.Tensor` of shape `(batch_size, 2)`):
            Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
            before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFSequenceClassifierOutput(ModelOutput):
    """
    Base class for outputs of sentence classification models.

    Args:
        loss (`tf.Tensor` of shape `(batch_size,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


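# Illustrative sketch (not part of the library): the docstring above notes that
# logits are pre-softmax scores, so downstream code applies a softmax (for
# probabilities) or an argmax (for a hard prediction). num_labels=3 is invented.
def _demo_sequence_classifier_probs() -> tuple:
    out = TFSequenceClassifierOutput(logits=tf.constant([[1.0, 0.0, -1.0]]))
    probs = tf.nn.softmax(out.logits, axis=-1)  # normalize raw scores to probabilities
    pred = tf.argmax(out.logits, axis=-1)  # predicted class id per example
    return probs, pred

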
@dataclass
class TFSeq2SeqSequenceClassifierOutput(ModelOutput):
    """
    Base class for outputs of sequence-to-sequence sentence classification models.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
            used (see `past_key_values` input) to speed up sequential decoding.
        decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
        decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
        cross_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
            weighted average in the cross-attention heads.
        encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder of the model.
        encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
        encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    decoder_hidden_states: Tuple[tf.Tensor] | None = None
    decoder_attentions: Tuple[tf.Tensor] | None = None
    cross_attentions: Tuple[tf.Tensor] | None = None
    encoder_last_hidden_state: tf.Tensor | None = None
    encoder_hidden_states: Tuple[tf.Tensor] | None = None
    encoder_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFSemanticSegmenterOutput(ModelOutput):
    """
    Base class for outputs of semantic segmentation models.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`):
            Classification scores for each pixel.

            <Tip warning={true}>

            The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is
            to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
            original image size as post-processing. You should always check your logits shape and resize as needed.

            </Tip>

        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each layer) of shape `(batch_size, patch_size, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


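# Hedged sketch (not part of the library): upsampling segmentation logits back
# to the input resolution, as the Tip above recommends. `tf.image.resize`
# expects channels-last input, while the logits documented here are
# channels-first, hence the transposes. The target height/width are caller-supplied.
def _demo_resize_segmentation_logits(logits: tf.Tensor, height: int, width: int) -> tf.Tensor:
    logits_hwc = tf.transpose(logits, perm=[0, 2, 3, 1])  # NCHW -> NHWC
    resized = tf.image.resize(logits_hwc, size=(height, width), method="bilinear")
    return tf.transpose(resized, perm=[0, 3, 1, 2])  # back to NCHW

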
@dataclass
class TFSemanticSegmenterOutputWithNoAttention(ModelOutput):
    """
    Base class for outputs of semantic segmentation models that do not output attention scores.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`):
            Classification scores for each pixel.

            <Tip warning={true}>

            The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is
            to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
            original image size as post-processing. You should always check your logits shape and resize as needed.

            </Tip>

        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each layer) of shape `(batch_size, patch_size, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None


@dataclass
class TFImageClassifierOutput(ModelOutput):
    """
    Base class for outputs of image classification models.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called
            feature maps) of the model at the output of each stage.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFMultipleChoiceModelOutput(ModelOutput):
    """
    Base class for outputs of multiple choice models.

    Args:
        loss (`tf.Tensor` of shape `(batch_size,)`, *optional*, returned when `labels` is provided):
            Classification loss.
        logits (`tf.Tensor` of shape `(batch_size, num_choices)`):
            *num_choices* is the second dimension of the input tensors (see *input_ids* above).

            Classification scores (before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFTokenClassifierOutput(ModelOutput):
    """
    Base class for outputs of token classification models.

    Args:
        loss (`tf.Tensor` of shape `(n,)`, *optional*, where n is the number of unmasked labels, returned when `labels` is provided):
            Classification loss.
        logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.num_labels)`):
            Classification scores (before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFQuestionAnsweringModelOutput(ModelOutput):
    """
    Base class for outputs of question answering models.

    Args:
        loss (`tf.Tensor` of shape `(batch_size,)`, *optional*, returned when `start_positions` and `end_positions` are provided):
            Total span extraction loss is the sum of a cross-entropy for the start and end positions.
        start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
            Span-start scores (before SoftMax).
        end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
            Span-end scores (before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    start_logits: tf.Tensor = None
    end_logits: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


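# Hedged sketch (not part of the library): turning span logits into token
# indices with a greedy per-side argmax, the usual (if simplistic) decoding for
# extractive QA. The logits here are random stand-ins with invented shapes.
def _demo_qa_span() -> tuple:
    out = TFQuestionAnsweringModelOutput(
        start_logits=tf.random.uniform((1, 10)),
        end_logits=tf.random.uniform((1, 10)),
    )
    start = tf.argmax(out.start_logits, axis=-1)  # most likely span start index
    end = tf.argmax(out.end_logits, axis=-1)  # most likely span end index
    return start, end

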
@dataclass
class TFSeq2SeqQuestionAnsweringModelOutput(ModelOutput):
    """
    Base class for outputs of sequence-to-sequence question answering models.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Total span extraction loss is the sum of a cross-entropy for the start and end positions.
        start_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
            Span-start scores (before SoftMax).
        end_logits (`tf.Tensor` of shape `(batch_size, sequence_length)`):
            Span-end scores (before SoftMax).
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
            used (see `past_key_values` input) to speed up sequential decoding.
        decoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
        decoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
        encoder_last_hidden_state (`tf.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder of the model.
        encoder_hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
        encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
            self-attention heads.
    """

    loss: tf.Tensor | None = None
    start_logits: tf.Tensor = None
    end_logits: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    decoder_hidden_states: Tuple[tf.Tensor] | None = None
    decoder_attentions: Tuple[tf.Tensor] | None = None
    encoder_last_hidden_state: tf.Tensor | None = None
    encoder_hidden_states: Tuple[tf.Tensor] | None = None
    encoder_attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFSequenceClassifierOutputWithPast(ModelOutput):
    """
    Base class for outputs of sentence classification models.

    Args:
        loss (`tf.Tensor` of shape `(batch_size,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        past_key_values (`List[tf.Tensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
            List of `tf.Tensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size, num_heads,
            sequence_length, embed_size_per_head)`.

            Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
            `past_key_values` input) to speed up sequential decoding.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape
            `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    past_key_values: List[tf.Tensor] | None = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None


@dataclass
class TFImageClassifierOutputWithNoAttention(ModelOutput):
    """
    Base class for outputs of image classification models.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
            Classification (or regression if config.num_labels==1) loss.
        logits (`tf.Tensor` of shape `(batch_size, config.num_labels)`):
            Classification (or regression if config.num_labels==1) scores (before SoftMax).
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called
            feature maps) of the model at the output of each stage.
    """

    loss: tf.Tensor | None = None
    logits: tf.Tensor = None
    hidden_states: Optional[Tuple[tf.Tensor, ...]] = None


@dataclass
class TFMaskedImageModelingOutput(ModelOutput):
    """
    Base class for outputs of masked image completion / in-painting models.

    Args:
        loss (`tf.Tensor` of shape `(1,)`, *optional*, returned when `bool_masked_pos` is provided):
            Reconstruction loss.
        reconstruction (`tf.Tensor` of shape `(batch_size, num_channels, height, width)`):
            Reconstructed / completed images.
        hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `tf.Tensor` (one for the output of the embeddings, if the model has an embedding layer, + one for
            the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called
            feature maps) of the model at the output of each stage.
        attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, sequence_length)`.

            Attention weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
    """

    loss: tf.Tensor | None = None
    reconstruction: tf.Tensor = None
    hidden_states: Tuple[tf.Tensor] | None = None
    attentions: Tuple[tf.Tensor] | None = None

    @property
    def logits(self):
        warnings.warn(
            "logits attribute is deprecated and will be removed in version 5 of Transformers."
            " Please use the reconstruction attribute to retrieve the final output instead.",
            FutureWarning,
        )
        return self.reconstruction


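# Illustrative sketch (not part of the library): the deprecated `logits` alias
# above still resolves to `reconstruction`, emitting a FutureWarning on access.
# The shape used here is invented.
def _demo_masked_image_logits_alias() -> None:
    out = TFMaskedImageModelingOutput(reconstruction=tf.zeros((1, 3, 16, 16)))
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # The alias returns the very same tensor object as `reconstruction`.
        assert out.logits is out.reconstruction
    assert any(issubclass(w.category, FutureWarning) for w in caught)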