tf.data.Dataset represents a potentially large set of elements. A Dataset can be used to represent an input pipeline as a collection of elements and a "logical plan" of transformations that act on those elements. Its constructor takes a DT_VARIANT tensor that represents the dataset.

element_spec: The type specification of an element of this dataset.

output_classes (deprecated): Returns the class of each component of an element of this dataset. Instructions for updating: use tf.compat.v1.data.get_output_classes(dataset).

output_shapes (deprecated): Returns the shape of each component of an element of this dataset. Instructions for updating: use tf.compat.v1.data.get_output_shapes(dataset).

output_types (deprecated): Returns the type of each component of an element of this dataset. Instructions for updating: use tf.compat.v1.data.get_output_types(dataset).

apply(transformation_func): Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset. For example:

    dataset = (dataset.map(lambda x: x ** 2)
               .apply(group_by_window(key_func, reduce_func, window_size)))

Args: transformation_func, a function that takes one Dataset argument and returns a Dataset. Returns: the Dataset returned by applying transformation_func to this dataset.
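The group_by_window call above relies on tf.data.experimental and on key_func, reduce_func and window_size being defined elsewhere. As a self-contained sketch of apply with a custom transformation (assuming TensorFlow 2.x with eager execution; square_and_batch is a hypothetical helper invented for this example, not part of the tf.data API):

    import tensorflow as tf

    # Hypothetical helper for illustration only. A "transformation function"
    # is any callable that takes a Dataset and returns a Dataset, which is
    # exactly what `apply` expects.
    def square_and_batch(batch_size):
        def _transform(dataset):
            return dataset.map(lambda x: x ** 2).batch(batch_size)
        return _transform

    dataset = tf.data.Dataset.range(6)             # 0, 1, 2, 3, 4, 5
    dataset = dataset.apply(square_and_batch(3))   # [0, 1, 4], [9, 16, 25]

    for batch in dataset:                          # eager iteration (TF 2.x)
        print(batch.numpy())

Because a transformation function is just a callable from Dataset to Dataset, the same pattern works for packaging any reusable pipeline stage.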
batch(batch_size, drop_remainder=False): Combines consecutive elements of this dataset into batches. The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, set the drop_remainder argument to True to prevent the smaller batch from being produced. Args: batch_size, a tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch; drop_remainder (optional), a tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.

cache(filename=''): Caches the elements in this dataset. Args: filename, a tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.

concatenate(dataset): Creates a Dataset by concatenating the given dataset with this dataset. The input dataset and the dataset to be concatenated should have the same nested structure and output types; in the original example, a.concatenate(b) succeeds, while a.concatenate(c) and a.concatenate(d) would result in an error.

enumerate(start=0): Enumerates the elements of this dataset, pairing each element with an index. Args: start, a tf.int64 scalar tf.Tensor, representing the start value for enumeration.

filter(predicate): Filters this dataset according to predicate. Note that tf.math.equal(x, y) is required for equality comparison inside the predicate. Args: predicate, a function mapping a dataset element to a boolean. Returns: the Dataset containing the elements of this dataset for which predicate is True.

filter_with_legacy_function(predicate): Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. Instructions for updating: use tf.data.Dataset.filter(). Note: this is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter, as this method will be removed in V2. Args: predicate, a function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor.

flat_map(map_func): Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:

    a = Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    a.flat_map(lambda x: Dataset.from_tensor_slices(x + 1))
    # ==> [2, 3, 4, 5, 6, 7, 8, 9, 10]

tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1). Args: map_func, a function mapping a dataset element to a dataset.

from_generator(generator, output_types, output_shapes=None, args=None): Creates a Dataset whose elements are generated by generator. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with the given output_types and (optional) output_shapes arguments. Note: the current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints.
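A minimal from_generator sketch, assuming TensorFlow 2.x with eager execution (under TF 1.x the dataset would be consumed through an iterator instead); the generator and shapes here are illustrative, not taken from this page:

    import tensorflow as tf

    # A plain Python generator; because from_generator is backed by
    # tf.numpy_function, the generator body runs as ordinary Python code.
    def gen():
        for i in range(3):
            yield (i, [1] * i)

    dataset = tf.data.Dataset.from_generator(
        gen,
        output_types=(tf.int64, tf.int64),
        output_shapes=(tf.TensorShape([]), tf.TensorShape([None])))

    for scalar, vector in dataset:   # (0, []), (1, [1]), (2, [1, 1])
        print(scalar.numpy(), vector.numpy())

Since the generator runs as ordinary Python, it must live in the same Python process as the program and is subject to the Python GIL, which is the main practical consequence of the tf.numpy_function constraint noted above.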
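Putting several of the transformations above together, the sketch below (again assuming TensorFlow 2.x; the expected values in the comments are my own, not from the original docstrings) exercises concatenate, cache, filter, enumerate, batch and flat_map on small integer datasets:

    import tensorflow as tf

    a = tf.data.Dataset.range(1, 4)      # 1, 2, 3
    b = tf.data.Dataset.range(4, 8)      # 4, 5, 6, 7

    # concatenate: both inputs share the same structure and element type.
    combined = a.concatenate(b)          # 1, 2, 3, 4, 5, 6, 7

    # cache: with no filename, elements are cached in memory after the
    # first full iteration.
    cached = combined.cache()

    # filter: the predicate returns a scalar boolean; tf.math.equal is
    # used for the equality comparison.
    evens = cached.filter(lambda x: tf.math.equal(x % 2, 0))   # 2, 4, 6

    # enumerate: pairs each element with an index starting at `start`.
    indexed = evens.enumerate(start=5)   # (5, 2), (6, 4), (7, 6)

    # batch: drop_remainder=True discards the short final batch [7].
    batched = combined.batch(3, drop_remainder=True)   # [1, 2, 3], [4, 5, 6]

    # flat_map: maps each row to a dataset and flattens, preserving order.
    nested = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
    flat = nested.flat_map(
        lambda x: tf.data.Dataset.from_tensor_slices(x + 1))   # 2, 3, 4, 5, 6, 7

    for name, ds in [("indexed", indexed), ("batched", batched), ("flat", flat)]:
        print(name, list(ds.as_numpy_iterator()))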