
TensorFlow Without Tensors

Too Long; Didn't Read

Tensors are the multi-dimensional arrays at the core of TensorFlow, enabling efficient representation and manipulation of data. This guide covers creating and manipulating tensors, along with advanced concepts such as broadcasting and ragged tensors, to give machine learning practitioners a comprehensive understanding.

Content Overview

  • Basics
  • About shapes
  • Indexing
  • Manipulating shapes
  • More on DTypes
  • Broadcasting
  • tf.convert_to_tensor
  • Ragged tensors
  • String tensors
  • Sparse tensors

Tensors are multi-dimensional arrays with a uniform type (called a dtype). You can see all supported dtypes at tf.dtypes.

If you're familiar with NumPy, tensors are (kind of) like np.arrays.

All tensors are immutable like Python numbers and strings: you can never update the contents of a tensor, only create a new one (see the short sketch after the imports below).

import tensorflow as tf
import numpy as np

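As a quick illustration of that immutability, here is a minimal sketch (tf.tensor_scatter_nd_update is just one of several ops that build a modified copy):

x = tf.constant([1, 2, 3])
try:
  x[0] = 99  # EagerTensor does not support item assignment
except TypeError as e:
  print(e)

# Instead, create a new tensor with the value changed.
print(tf.tensor_scatter_nd_update(x, indices=[[0]], updates=[99]))
tf.Tensor([99  2  3], shape=(3,), dtype=int32)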

Basics

First, create some basic tensors.

Here is a "scalar" or "rank-0" tensor. A scalar contains a single value, and no "axes".

# This will be an int32 tensor by default; see "dtypes" below.
rank_0_tensor = tf.constant(4)
print(rank_0_tensor)

tf.Tensor(4, shape=(), dtype=int32)


"vector" or "rank-1" tensor is like a list of values. A vector has one axis: ベクトルは1つの軸を持っています。

# Let's make this a float tensor.
rank_1_tensor = tf.constant([2.0, 3.0, 4.0])
print(rank_1_tensor)
tf.Tensor([2. 3. 4.], shape=(3,), dtype=float32)


A "matrix" or "rank-2" tensor has two axes:

# If you want to be specific, you can set the dtype (see below) at creation time
rank_2_tensor = tf.constant([[1, 2],
                             [3, 4],
                             [5, 6]], dtype=tf.float16)
print(rank_2_tensor)
tf.Tensor(
[[1. 2.]
 [3. 4.]
 [5. 6.]], shape=(3, 2), dtype=float16)

A scalar, shape: [] (the number 4).

A vector, shape: [3] (a line with 3 sections, each one containing a number).

A matrix, shape: [3, 2] (a 3x2 grid, with each cell containing a number).

Tensors may have more axes; here is a tensor with three axes:

# There can be an arbitrary number of
# axes (sometimes called "dimensions")
rank_3_tensor = tf.constant([
  [[0, 1, 2, 3, 4],
   [5, 6, 7, 8, 9]],
  [[10, 11, 12, 13, 14],
   [15, 16, 17, 18, 19]],
  [[20, 21, 22, 23, 24],
   [25, 26, 27, 28, 29]],])

print(rank_3_tensor)
tf.Tensor(
[[[ 0  1  2  3  4]
  [ 5  6  7  8  9]]

 [[10 11 12 13 14]
  [15 16 17 18 19]]

 [[20 21 22 23 24]
  [25 26 27 28 29]]], shape=(3, 2, 5), dtype=int32)

There are many ways you might visualize a tensor with more than two axes.

A 3-axis tensor, shape: [3, 2, 5]



You can convert a tensor to a NumPy array either using np.array or the tensor.numpy method:

np.array(rank_2_tensor)
array([[1., 2.],
       [3., 4.],
       [5., 6.]], dtype=float16)
rank_2_tensor.numpy()
array([[1., 2.],
       [3., 4.],
       [5., 6.]], dtype=float16)

Tensors often contain floats and ints, but have many other types, including:

  • Complex numbers
  • Strings

The base tf.Tensor class requires tensors to be "rectangular", that is, along each axis, every element is the same size. However, there are specialized types of tensors that can handle different shapes:

  • Ragged tensors (see RaggedTensor below)
  • Sparse tensors (see SparseTensor below)

You can do basic math on tensors, including addition, element-wise multiplication, and matrix multiplication.

a = tf.constant([[1, 2],
                 [3, 4]])
b = tf.constant([[1, 1],
                 [1, 1]]) # Could have also said `tf.ones([2,2], dtype=tf.int32)`

print(tf.add(a, b), "\n")
print(tf.multiply(a, b), "\n")
print(tf.matmul(a, b), "\n")
tf.Tensor(
[[2 3]
 [4 5]], shape=(2, 2), dtype=int32) 

tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32) 

tf.Tensor(
[[3 3]
 [7 7]], shape=(2, 2), dtype=int32)
print(a + b, "\n") # element-wise addition
print(a * b, "\n") # element-wise multiplication
print(a @ b, "\n") # matrix multiplication

tf.Tensor(
[[2 3]
 [4 5]], shape=(2, 2), dtype=int32) 

tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32) 

tf.Tensor(
[[3 3]
 [7 7]], shape=(2, 2), dtype=int32)

Tensors are used in all kinds of operations (or "Ops").

c = tf.constant([[4.0, 5.0], [10.0, 1.0]])

# Find the largest value
print(tf.reduce_max(c))
# Find the index of the largest value
print(tf.math.argmax(c))
# Compute the softmax
print(tf.nn.softmax(c))
tf.Tensor(10.0, shape=(), dtype=float32)
tf.Tensor([1 0], shape=(2,), dtype=int64)
tf.Tensor(
[[2.6894143e-01 7.3105854e-01]
 [9.9987662e-01 1.2339458e-04]], shape=(2, 2), dtype=float32)

Note: Typically, anywhere a TensorFlow function expects a Tensor as input, the function will also accept anything that can be converted to a Tensor using tf.convert_to_tensor.

tf.convert_to_tensor([1,2,3])
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
tf.reduce_max([1,2,3])
<tf.Tensor: shape=(), dtype=int32, numpy=3>
tf.reduce_max(np.array([1,2,3]))
<tf.Tensor: shape=(), dtype=int64, numpy=3>


About shapes

Tensors have shapes. Some vocabulary:

  • Shape: The length (number of elements) of each of the axes of a tensor.
  • Rank: Number of tensor axes. A scalar has rank 0, a vector has rank 1, a matrix is rank 2.
  • Axis or Dimension: A particular dimension of a tensor.
  • Size: The total number of items in the tensor, the product of the shape vector's elements.

Note: Although you may see reference to a "tensor of two dimensions", a rank-2 tensor does not usually describe a 2D space.

Tensors and tf.TensorShape objects have convenient properties for accessing these:

rank_4_tensor = tf.zeros([3, 2, 4, 5])


A rank-4 tensor, shape: [3, 2, 4, 5]

A tensor shape is like a vector.

A 4-axis tensor.

print("Type of every element:", rank_4_tensor.dtype)
print("Number of axes:", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0])
print("Elements along the last axis of tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (3*2*4*5): ", tf.size(rank_4_tensor).numpy())
Type of every element: <dtype: 'float32'>
Number of axes: 4
Shape of tensor: (3, 2, 4, 5)
Elements along axis 0 of tensor: 3
Elements along the last axis of tensor: 5
Total number of elements (3*2*4*5):  120

But note that the Tensor.ndim and Tensor.shape attributes don't return Tensor objects. If you need a Tensor, use the tf.rank or tf.shape function. This difference is subtle, but it can be important when building graphs (later).

tf.rank(rank_4_tensor)
<tf.Tensor: shape=(), dtype=int32, numpy=4>
tf.shape(rank_4_tensor)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([3, 2, 4, 5], dtype=int32)>

Often, axes are ordered from global to local: the batch axis first, followed by spatial dimensions, and features for each location last.

Typical axis order

Keep track of what each axis is. A 4-axis tensor might be: Batch, Width, Height, Features.
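For example, a hypothetical batch of images laid out in that order (the sizes here are illustrative only):

# [batch, height, width, features]: 32 images of 128x256 pixels with 3 channels
image_batch = tf.zeros([32, 128, 256, 3])
print("Batch size:", image_batch.shape[0])
print("Features per location:", image_batch.shape[-1])
Batch size: 32
Features per location: 3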


Indexing

Single-axis indexing

TensorFlow follows standard Python indexing rules, similar to indexing a list or string in Python, and the basic rules for NumPy indexing:

  • Indexes start at 0.
  • Negative indices count backwards from the end.
  • Colons, :, are used for slices: start:stop:step.


rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
print(rank_1_tensor.numpy())
[ 0  1  1  2  3  5  8 13 21 34]

Indexing with a scalar removes the axis:

print("First:", rank_1_tensor[0].numpy())
print("Second:", rank_1_tensor[1].numpy())
print("Last:", rank_1_tensor[-1].numpy())
First: 0
Second: 1
Last: 34

Indexing with a : slice keeps the axis:

print("Everything:", rank_1_tensor[:].numpy())
print("Before 4:", rank_1_tensor[:4].numpy())
print("From 4 to the end:", rank_1_tensor[4:].numpy())
print("From 2, before 7:", rank_1_tensor[2:7].numpy())
print("Every other item:", rank_1_tensor[::2].numpy())
print("Reversed:", rank_1_tensor[::-1].numpy())
Everything: [ 0  1  1  2  3  5  8 13 21 34]
Before 4: [0 1 1 2]
From 4 to the end: [ 3  5  8 13 21 34]
From 2, before 7: [1 2 3 5 8]
Every other item: [ 0  1  3  8 21]
Reversed: [34 21 13  8  5  3  2  1  1  0]

Multi-axis indexing

Higher-rank tensors are indexed by passing multiple indices.

The exact same rules as in the single-axis case apply to each axis independently.

print(rank_2_tensor.numpy())
[[1. 2.]
 [3. 4.]
 [5. 6.]]

Passing an integer for each index gives a scalar as the result.

# Pull out a single value from a 2-rank tensor
print(rank_2_tensor[1, 1].numpy())
4.0

You can index using any combination of integers and slices:

# Get row and column tensors
print("Second row:", rank_2_tensor[1, :].numpy())
print("Second column:", rank_2_tensor[:, 1].numpy())
print("Last row:", rank_2_tensor[-1, :].numpy())
print("First item in last column:", rank_2_tensor[0, -1].numpy())
print("Skip the first row:")
print(rank_2_tensor[1:, :].numpy(), "\n")
Second row: [3. 4.]
Second column: [2. 4. 6.]
Last row: [5. 6.]
First item in last column: 2.0
Skip the first row:
[[3. 4.]
 [5. 6.]]

Here is an example with a 3-axis tensor:

print(rank_3_tensor[:, :, 4])
tf.Tensor(
[[ 4  9]
 [14 19]
 [24 29]], shape=(3, 2), dtype=int32)


Selecting the last feature across all locations in each example in the batch

A 3x2x5 tensor with all the values at the index-4 of the last axis selected.

The selected values packed into a 2-axis tensor.

Read the tensor slicing guide to learn how you can apply indexing to manipulate individual elements in your tensors.

Manipulating shapes

Reshaping a tensor is of great utility.

# Shape returns a `TensorShape` object that shows the size along each axis
x = tf.constant([[1], [2], [3]])
print(x.shape)
(3, 1)
# You can convert this object into a Python list, too
print(x.shape.as_list())
[3, 1]

You can reshape a tensor into a new shape. The tf.reshape operation is fast and cheap, as the underlying data does not need to be duplicated.

# You can reshape a tensor to a new shape.
# Note that you're passing in a list
reshaped = tf.reshape(x, [1, 3])
print(x.shape)
print(reshaped.shape)
(3, 1)
(1, 3)

The data maintains its layout in memory, and a new tensor is created with the requested shape, pointing to the same data. TensorFlow uses C-style "row-major" memory ordering, where incrementing the rightmost index corresponds to a single step in memory.

print(rank_3_tensor)
tf.Tensor(
[[[ 0  1  2  3  4]
  [ 5  6  7  8  9]]

 [[10 11 12 13 14]
  [15 16 17 18 19]]

 [[20 21 22 23 24]
  [25 26 27 28 29]]], shape=(3, 2, 5), dtype=int32)

If you flatten a tensor you can see what order it is laid out in memory.

# A `-1` passed in the `shape` argument says "Whatever fits".
print(tf.reshape(rank_3_tensor, [-1]))
tf.Tensor(
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25 26 27 28 29], shape=(30,), dtype=int32)

Typically the only reasonable use of tf.reshape is to combine or split adjacent axes (or add/remove 1s).

For this 3x2x5 tensor, reshaping to (3x2)x5 or 3x(2x5) are both reasonable things to do, as the slices do not mix:

print(tf.reshape(rank_3_tensor, [3*2, 5]), "\n")
print(tf.reshape(rank_3_tensor, [3, -1]))
tf.Tensor(
[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13 14]
 [15 16 17 18 19]
 [20 21 22 23 24]
 [25 26 27 28 29]], shape=(6, 5), dtype=int32) 

tf.Tensor(
[[ 0  1  2  3  4  5  6  7  8  9]
 [10 11 12 13 14 15 16 17 18 19]
 [20 21 22 23 24 25 26 27 28 29]], shape=(3, 10), dtype=int32)



Some good reshapes.

A 3x2x5 tensor

The same data reshaped to (3x2)x5

The same data reshaped to 3x(2x5)

Reshaping will "work" for any new shape with the same total number of elements, but it will not do anything useful if you do not respect the order of the axes.

Swapping axes in tf.reshape does not work; you need tf.transpose for that.

# Bad examples: don't do this

# You can't reorder axes with reshape.
print(tf.reshape(rank_3_tensor, [2, 3, 5]), "\n") 

# This is a mess
print(tf.reshape(rank_3_tensor, [5, 6]), "\n")

# This doesn't work at all
try:
  tf.reshape(rank_3_tensor, [7, -1])
except Exception as e:
  print(f"{type(e).__name__}: {e}")


tf.Tensor(
[[[ 0  1  2  3  4]
  [ 5  6  7  8  9]
  [10 11 12 13 14]]

 [[15 16 17 18 19]
  [20 21 22 23 24]
  [25 26 27 28 29]]], shape=(2, 3, 5), dtype=int32) 

tf.Tensor(
[[ 0  1  2  3  4  5]
 [ 6  7  8  9 10 11]
 [12 13 14 15 16 17]
 [18 19 20 21 22 23]
 [24 25 26 27 28 29]], shape=(5, 6), dtype=int32) 

InvalidArgumentError: {{function_node __wrapped__Reshape_device_/job:localhost/replica:0/task:0/device:GPU:0}} Input to reshape is a tensor with 30 values, but the requested shape requires a multiple of 7 [Op:Reshape]



Some bad reshapes.

You can't reorder axes; use tf.transpose for that.

Anything that mixes the slices of data together is probably wrong.

The new shape must fit exactly.
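For actual axis reordering, tf.transpose is the tool; a quick sketch on the same tensor:

# Swap the first two axes; the slices travel intact instead of being re-chopped.
print(tf.transpose(rank_3_tensor, perm=[1, 0, 2]).shape)
(2, 3, 5)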

You may also run across not-fully-specified shapes. Either the shape contains a None (an axis-length is unknown) or the whole shape is None (the rank of the tensor is unknown).

Except for tf.RaggedTensor, such shapes will only occur in the context of TensorFlow's symbolic, graph-building APIs (a short sketch follows the list):

  • tf.function
  • The Keras functional API
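Here is a minimal sketch of where such a shape shows up, assuming a simple tf.function with an unspecified batch axis:

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 3], dtype=tf.float32)])
def row_sums(x):
  # While tracing, the symbolic shape is (None, 3).
  print("Symbolic shape while tracing:", x.shape)
  return tf.reduce_sum(x, axis=-1)

row_sums(tf.ones([2, 3]))
row_sums(tf.ones([5, 3]))  # reuses the same trace; the None axis accepts any length
Symbolic shape while tracing: (None, 3)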


More on DTypes

To inspect a tf.Tensor's data type, use the Tensor.dtype property.

When creating a tf.Tensor from a Python object, you may optionally specify the datatype.

If you don't, TensorFlow chooses a datatype that can represent your data. TensorFlow converts Python integers to tf.int32 and Python floating point numbers to tf.float32. Otherwise, TensorFlow uses the same rules NumPy uses when converting to arrays.
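A quick check of those defaults (note that NumPy's default integer type is platform-dependent; the output below assumes a typical 64-bit system):

print(tf.constant(7).dtype)              # Python int
print(tf.constant(7.0).dtype)            # Python float
print(tf.constant(np.array([7])).dtype)  # follows NumPy's rules
<dtype: 'int32'>
<dtype: 'float32'>
<dtype: 'int64'>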

You can cast from type to type.

the_f64_tensor = tf.constant([2.2, 3.3, 4.4], dtype=tf.float64)
the_f16_tensor = tf.cast(the_f64_tensor, dtype=tf.float16)
# Now, cast to an uint8 and lose the decimal precision
the_u8_tensor = tf.cast(the_f16_tensor, dtype=tf.uint8)
print(the_u8_tensor)
tf.Tensor([2 3 4], shape=(3,), dtype=uint8)


Broadcasting

Broadcasting is a concept borrowed from the equivalent feature in NumPy. In short, under certain conditions, smaller tensors are "stretched" automatically to fit larger tensors when running combined operations on them.

The simplest and most common case is when you attempt to multiply or add a tensor to a scalar. In that case, the scalar is broadcast to be the same shape as the other argument.

x = tf.constant([1, 2, 3])

y = tf.constant(2)
z = tf.constant([2, 2, 2])
# All of these are the same computation
print(tf.multiply(x, 2))
print(x * y)
print(x * z)
tf.Tensor([2 4 6], shape=(3,), dtype=int32)
tf.Tensor([2 4 6], shape=(3,), dtype=int32)
tf.Tensor([2 4 6], shape=(3,), dtype=int32)

Likewise, axes with length 1 can be stretched out to match the other arguments. Both arguments can be stretched in the same computation.

In this case, a 3x1 matrix is element-wise multiplied by a 1x4 matrix to produce a 3x4 matrix. Note how the leading 1 is optional: the shape of y is [4].

# These are the same computations
x = tf.reshape(x,[3,1])
y = tf.range(1, 5)
print(x, "\n")
print(y, "\n")
print(tf.multiply(x, y))
tf.Tensor(
[[1]
 [2]
 [3]], shape=(3, 1), dtype=int32) 

tf.Tensor([1 2 3 4], shape=(4,), dtype=int32) 

tf.Tensor(
[[ 1  2  3  4]
 [ 2  4  6  8]
 [ 3  6  9 12]], shape=(3, 4), dtype=int32)

A broadcasted multiply: a [3, 1] times a [1, 4] gives a [3, 4]

Multiplying a 3x1 matrix by a 1x4 matrix results in a 3x4 matrix.

Here is the same operation without broadcasting:

x_stretch = tf.constant([[1, 1, 1, 1],
                         [2, 2, 2, 2],
                         [3, 3, 3, 3]])

y_stretch = tf.constant([[1, 2, 3, 4],
                         [1, 2, 3, 4],
                         [1, 2, 3, 4]])

print(x_stretch * y_stretch)  # Again, operator overloading
tf.Tensor(
[[ 1  2  3  4]
 [ 2  4  6  8]
 [ 3  6  9 12]], shape=(3, 4), dtype=int32)

Most of the time, broadcasting is both time and space efficient, as the broadcast operation never materializes the expanded tensors in memory.

You can see what broadcasting looks like using tf.broadcast_to.

print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))
tf.Tensor(
[[1 2 3]
 [1 2 3]
 [1 2 3]], shape=(3, 3), dtype=int32)

Unlike a mathematical op, for example, broadcast_to does nothing special to save memory. Here, you are materializing the tensor.

It can get even more complicated. This section of Jake VanderPlas's book Python Data Science Handbook shows more broadcasting tricks (again in NumPy).


tf.convert_to_tensor

Most ops, like tf.matmul and tf.reshape, take arguments of class tf.Tensor. However, you'll notice in the cases above, Python objects shaped like tensors are also accepted.

Most, but not all, ops call convert_to_tensor on non-tensor arguments. There is a registry of conversions, and most object classes, like NumPy's ndarray, TensorShape, Python lists, and tf.Variable, will all convert automatically.

See tf.register_tensor_conversion_function for more details if you have your own type you'd like to automatically convert to a tensor.
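A small sketch of that automatic conversion, passing a Python list, a NumPy array, and a tf.Variable where ops expect tensors:

print(tf.reshape([1, 2, 3, 4], [2, 2]))                        # Python list
print(tf.matmul(np.eye(2, dtype=np.float32), tf.ones([2, 2]))) # NumPy ndarray
v = tf.Variable([[1.0], [2.0]])
print(tf.reshape(v, [1, 2]))                                   # tf.Variable
tf.Tensor(
[[1 2]
 [3 4]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[1. 1.]
 [1. 1.]], shape=(2, 2), dtype=float32)
tf.Tensor([[1. 2.]], shape=(1, 2), dtype=float32)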


Ragged tensors

A tensor with variable numbers of elements along some axis is called "ragged". Use tf.ragged.RaggedTensor for ragged data.

For example, this cannot be represented as a regular tensor:

tf.RaggedTensor, shape: [4, None]

A 2-axis ragged tensor; each row can have a different length.

ragged_list = [
    [0, 1, 2, 3],
    [4, 5],
    [6, 7, 8],
    [9]]
try:
  tensor = tf.constant(ragged_list)
except Exception as e:
  print(f"{type(e).__name__}: {e}")
ValueError: Can't convert non-rectangular Python sequence to Tensor.

Instead, create a tf.RaggedTensor using tf.ragged.constant:

ragged_tensor = tf.ragged.constant(ragged_list)
print(ragged_tensor)
<tf.RaggedTensor [[0, 1, 2, 3], [4, 5], [6, 7, 8], [9]]>

The shape of a tf.RaggedTensor contains some axes with unknown lengths:

print(ragged_tensor.shape)
(4, None)
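If you need a rectangular tensor downstream, one option is to pad the ragged rows out into a dense tensor:

# Pad short rows with a default value to get a regular (dense) tensor.
print(ragged_tensor.to_tensor(default_value=0))
tf.Tensor(
[[0 1 2 3]
 [4 5 0 0]
 [6 7 8 0]
 [9 0 0 0]], shape=(4, 4), dtype=int32)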


String tensors

tf.string is a dtype, which is to say you can represent data in tensors as strings (variable-length byte arrays).

The strings are atomic and cannot be indexed the way Python strings are. The length of the string is not one of the axes of the tensor. See tf.strings for functions to manipulate them.

Here is a scalar string tensor:

# Tensors can be strings, too. Here is a scalar string.
scalar_string_tensor = tf.constant("Gray wolf")
print(scalar_string_tensor)
tf.Tensor(b'Gray wolf', shape=(), dtype=string)
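Byte-level length and slicing go through tf.strings rather than Python-style indexing; a minimal sketch using the tensor above:

print(tf.strings.length(scalar_string_tensor))                # bytes, not an axis
print(tf.strings.substr(scalar_string_tensor, pos=0, len=4))
tf.Tensor(9, shape=(), dtype=int32)
tf.Tensor(b'Gray', shape=(), dtype=string)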

And a vector of strings:

A vector of strings, shape: [3,]

The string length is not one of the tensor's axes.


# If you have three string tensors of different lengths, this is OK.
tensor_of_strings = tf.constant(["Gray wolf",
                                 "Quick brown fox",
                                 "Lazy dog"])
# Note that the shape is (3,). The string length is not included.
print(tensor_of_strings)
tf.Tensor([b'Gray wolf' b'Quick brown fox' b'Lazy dog'], shape=(3,), dtype=string)

In the above printout, the b prefix indicates that the tf.string dtype is not a unicode string, but a byte-string. See the Unicode tutorial for more about working with unicode text in TensorFlow.

If you pass unicode characters, they are utf-8 encoded.

tf.constant("🥳👍")
<tf.Tensor: shape=(), dtype=string, numpy=b'\xf0\x9f\xa5\xb3\xf0\x9f\x91\x8d'>

Some basic functions with strings can be found in tf.strings, including tf.strings.split.

# You can use split to split a string into a set of tensors
print(tf.strings.split(scalar_string_tensor, sep=" "))
tf.Tensor([b'Gray' b'wolf'], shape=(2,), dtype=string)
# ...but it turns into a `RaggedTensor` if you split up a tensor of strings,
# as each string might be split into a different number of parts.
print(tf.strings.split(tensor_of_strings))
<tf.RaggedTensor [[b'Gray', b'wolf'], [b'Quick', b'brown', b'fox'], [b'Lazy', b'dog']]>

Three strings split, shape: [3, None]

Splitting multiple strings returns a tf.RaggedTensor.

And tf.strings.to_number:

text = tf.constant("1 10 100")
print(tf.strings.to_number(tf.strings.split(text, " ")))
tf.Tensor([  1.  10. 100.], shape=(3,), dtype=float32)

Although you can't use tf.cast to turn a string tensor into numbers, you can convert it into bytes, and then into numbers.

byte_strings = tf.strings.bytes_split(tf.constant("Duck"))
byte_ints = tf.io.decode_raw(tf.constant("Duck"), tf.uint8)
print("Byte strings:", byte_strings)
print("Bytes:", byte_ints)
Byte strings: tf.Tensor([b'D' b'u' b'c' b'k'], shape=(4,), dtype=string)
Bytes: tf.Tensor([ 68 117  99 107], shape=(4,), dtype=uint8)
# Or split it up as unicode and then decode it
unicode_bytes = tf.constant("アヒル 🦆")
unicode_char_bytes = tf.strings.unicode_split(unicode_bytes, "UTF-8")
unicode_values = tf.strings.unicode_decode(unicode_bytes, "UTF-8")

print("\nUnicode bytes:", unicode_bytes)
print("\nUnicode chars:", unicode_char_bytes)
print("\nUnicode values:", unicode_values)
Unicode bytes: tf.Tensor(b'\xe3\x82\xa2\xe3\x83\x92\xe3\x83\xab \xf0\x9f\xa6\x86', shape=(), dtype=string)

Unicode chars: tf.Tensor([b'\xe3\x82\xa2' b'\xe3\x83\x92' b'\xe3\x83\xab' b' ' b'\xf0\x9f\xa6\x86'], shape=(5,), dtype=string)

Unicode values: tf.Tensor([ 12450  12498  12523     32 129414], shape=(5,), dtype=int32)

The tf.string dtype is used for all raw bytes data in TensorFlow. The tf.io module contains functions for converting data to and from bytes, including decoding images and parsing csv.
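For example, parsing one CSV line from a string tensor (a minimal sketch; the record_defaults also fix the output dtypes):

line = tf.constant("3.0,4.0,5.0")
print(tf.io.decode_csv(line, record_defaults=[0.0, 0.0, 0.0]))
[<tf.Tensor: shape=(), dtype=float32, numpy=3.0>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0>, <tf.Tensor: shape=(), dtype=float32, numpy=5.0>]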

Sparse tensors

Sometimes your data is sparse, like a very wide embedding space. TensorFlow supports tf.sparse.SparseTensor and related operations to store sparse data efficiently.

tf.SparseTensor, shape: [3, 4]

A 3x4 grid, with values in only two of the cells.

# Sparse tensors store values by index in a memory-efficient manner
sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                                       values=[1, 2],
                                       dense_shape=[3, 4])
print(sparse_tensor, "\n")

# You can convert sparse tensors to dense
print(tf.sparse.to_dense(sparse_tensor))
SparseTensor(indices=tf.Tensor(
[[0 0]
 [1 2]], shape=(2, 2), dtype=int64), values=tf.Tensor([1 2], shape=(2,), dtype=int32), dense_shape=tf.Tensor([3 4], shape=(2,), dtype=int64)) 

tf.Tensor(
[[1 0 0 0]
 [0 0 2 0]
 [0 0 0 0]], shape=(3, 4), dtype=int32)

Originally published on the TensorFlow website, this article appears here under a new title and is licensed under CC BY 4.0. Code samples are shared under the Apache 2.0 License.

