test_split: Float between 0 and 1. Fraction of the dataset to be used as test data. Defaults to 0.2, meaning 20% of the dataset is used as test data.

seed: int. Seed for reproducible data shuffling.

start_char: int. The start of a sequence will be marked with this character. Defaults to 1 because 0 is usually the padding character.

oov_char: int. The out-of-vocabulary character. Words that were cut out because of the num_words or skip_top limits will be replaced with this character.

index_from: int. Index actual words with this index and higher.

**kwargs: Used for backwards compatibility.
Returns

Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test).

x_train, x_test: lists of sequences, which are lists of indexes (integers). If the num_words argument was specified, the maximum possible index value is num_words - 1. If the maxlen argument was specified, the largest possible sequence length is maxlen.

y_train, y_test: lists of integer labels (integers between 0 and 45, one per topic).

Note: The 'out of vocabulary' character is only used for words that were present in the training set but were excluded because they did not make the num_words cut. Words that were not seen in the training set but are in the test set have simply been skipped.
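For illustration, a minimal sketch of how num_words, start_char, and the defaults above interact (the 5,000-word cap is an arbitrary choice, not a recommended setting):

```python
import tensorflow as tf

# Keep only the 5,000 most frequent words; rarer words are replaced
# by oov_char (2 by default).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.reuters.load_data(
    num_words=5000, test_split=0.2, seed=113
)

# Indices stay below num_words, and every sequence begins with start_char (1).
assert max(max(seq) for seq in x_train) < 5000
assert all(seq[0] == 1 for seq in x_train)
```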
get_word_index function

```python
tf.keras.datasets.reuters.get_word_index(path="reuters_word_index.json")
```

Retrieves a dict mapping words to their index in the Reuters dataset.

Arguments

path: where to cache the data (relative to ~/.keras/datasets).

Returns

The word index dictionary. Keys are word strings, values are their index.
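Putting the two functions together, a sketch of decoding a newswire back to words (the +3 offset mirrors the default index_from, since indices 0, 1, and 2 are reserved):

```python
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.reuters.load_data(num_words=10000)
word_index = tf.keras.datasets.reuters.get_word_index()

# Shift by index_from (3 by default): indices 0, 1, and 2 are reserved for
# padding, start_char, and oov_char respectively.
inverted = {index + 3: word for word, index in word_index.items()}
decoded = " ".join(inverted.get(i, "?") for i in x_train[0])
print(decoded)
```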
|
Boston Housing price regression dataset
|
load_data function
|
tf.keras.datasets.boston_housing.load_data(
|
path="boston_housing.npz", test_split=0.2, seed=113
|
)
Loads the Boston Housing dataset.

This is a dataset taken from the StatLib library, which is maintained at Carnegie Mellon University.

Samples contain 13 attributes of houses at different locations around the Boston suburbs in the late 1970s. Targets are the median values of the houses at a location (in k$).

The attributes themselves are defined on the StatLib website.
Arguments

path: path where to cache the dataset locally (relative to ~/.keras/datasets).

test_split: fraction of the data to reserve as test set.

seed: Random seed for shuffling the data before computing the test split.
Returns

Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test).

x_train, x_test: NumPy arrays with shape (num_samples, 13) containing the training samples (for x_train) or the test samples (for x_test).

y_train, y_test: NumPy arrays of shape (num_samples,) containing the target scalars. The targets are float scalars, typically between 10 and 50, that represent the home prices in k$.
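For illustration, a minimal usage sketch based on the defaults above:

```python
import tensorflow as tf

# Load with the default 80/20 train/test split and default seed.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
    test_split=0.2, seed=113
)

# Each sample has 13 attributes; targets are median home prices in k$.
assert x_train.shape[1] == 13
assert y_train.shape == (x_train.shape[0],)
```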
CIFAR10 small images classification dataset

load_data function

```python
tf.keras.datasets.cifar10.load_data()
```

Loads the CIFAR10 dataset.

This is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 10 categories. See more info at the CIFAR homepage.

The classes are:
| Label | Description |
|---|---|
| 0 | airplane |
| 1 | automobile |
| 2 | bird |
| 3 | cat |
| 4 | deer |
| 5 | dog |
| 6 | frog |
| 7 | horse |
| 8 | ship |
| 9 | truck |
Returns

Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test).

x_train: uint8 NumPy array of RGB image data with shape (50000, 32, 32, 3), containing the training data. Pixel values range from 0 to 255.

y_train: uint8 NumPy array of labels (integers in range 0-9) with shape (50000, 1) for the training data.

x_test: uint8 NumPy array of RGB image data with shape (10000, 32, 32, 3), containing the test data. Pixel values range from 0 to 255.

y_test: uint8 NumPy array of labels (integers in range 0-9) with shape (10000, 1) for the test data.
Example

```python
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
assert x_train.shape == (50000, 32, 32, 3)
assert x_test.shape == (10000, 32, 32, 3)
assert y_train.shape == (50000, 1)
assert y_test.shape == (10000, 1)
```
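Since the arrays are uint8 in the 0-255 range, a common follow-up (not part of the API itself) is to rescale before training; a minimal sketch:

```python
# Rescale pixel values from [0, 255] to [0.0, 1.0] for training.
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
```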
IMDB movie review sentiment classification dataset

load_data function

```python
tf.keras.datasets.imdb.load_data(
    path="imdb.npz",
    num_words=None,
    skip_top=0,
    maxlen=None,
    seed=113,
    start_char=1,
    oov_char=2,
    index_from=3,
    **kwargs
)
```

Loads the IMDB dataset.
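For illustration, a minimal usage sketch (the 10,000-word cap is an arbitrary choice, not a recommended setting):

```python
import tensorflow as tf

# Keep only the 10,000 most frequent words; rarer words map to oov_char (2).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(
    num_words=10000
)

# Labels are binary sentiment: 0 (negative) or 1 (positive).
assert set(int(label) for label in y_train) == {0, 1}
```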