Searched full:training (Results 1 – 25 of 4220) sorted by relevance
4464  name: "training/SGD/iter"
4489  s: "training/SGD/iter"
4494  name: "training/SGD/bn2a_branch1/beta/momentum"
4522  s: "training/SGD/bn2a_branch1/beta/momentum"
4527  name: "training/SGD/bn2a_branch1/gamma/momentum"
4555  s: "training/SGD/bn2a_branch1/gamma/momentum"
4560  name: "training/SGD/bn2a_branch2a/beta/momentum"
4588  s: "training/SGD/bn2a_branch2a/beta/momentum"
4593  name: "training/SGD/bn2a_branch2a/gamma/momentum"
4621  s: "training/SGD/bn2a_branch2a/gamma/momentum"
[all …]
6751  name: "training/LossScaleOptimizer/truediv"
6798  …name: "training/LossScaleOptimizer/gradients/loss_1/res4c_branch2c/kernel/Regularizer/Square_grad/…
6986  …name: "training/LossScaleOptimizer/gradients/loss_1/res4c_branch2b/kernel/Regularizer/Square_grad/…
7140  …name: "training/LossScaleOptimizer/gradients/loss_1/res4c_branch2a/kernel/Regularizer/Square_grad/…
7340  …name: "training/LossScaleOptimizer/gradients/loss_1/res2a_branch2c/kernel/Regularizer/Square_grad/…
7845  …name: "training/LossScaleOptimizer/gradients/loss_1/res4a_branch1/kernel/Regularizer/Square_grad/M…
8165  …name: "training/LossScaleOptimizer/gradients/loss_1/res4a_branch2a/kernel/Regularizer/Square_grad/…
8356  …name: "training/LossScaleOptimizer/gradients/loss_1/res3d_branch2c/kernel/Regularizer/Square_grad/…
8510  …name: "training/LossScaleOptimizer/gradients/loss_1/res3d_branch2b/kernel/Regularizer/Square_grad/…
8664  …name: "training/LossScaleOptimizer/gradients/loss_1/res3d_branch2a/kernel/Regularizer/Square_grad/…
[all …]
75   <h1><a href="ml_v1.html">AI Platform Training & Prediction API</a> . <a href="ml_v1.projects.html">…
85   <p class="firstline">Creates a training or a batch prediction job.</p>
139  <pre>Creates a training or a batch prediction job.
146  { # Represents a training or prediction job.
180  …training job. When using the gcloud command to submit your training job, you can specify the input…
181  "args": [ # Optional. Command-line arguments passed to the training application when it …
184  …tform Training to enable [interactive shell access](https://cloud.google.com/ai-platform/training/…
185  …training job, instead of using Google's default encryption. If this is set, then all resource…
186  …e customer-managed encryption key used to protect a resource, such as a training job. It has the f…
188  …training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) Set `evalu…
[all …]
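The snippet above is from the generated REST reference for projects.jobs.create. As a rough, non-authoritative sketch, such a job can be submitted with the discovery-based Python client; the project, bucket, and module names below are placeholders, and the trainingInput fields should be checked against the full reference.

```python
# Hypothetical sketch of submitting a training job through the AI Platform
# Training & Prediction API (ml/v1). All identifiers are placeholders.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
job = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "BASIC",
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
        "args": ["--epochs", "10"],  # passed to the training application
    },
}
request = ml.projects().jobs().create(parent="projects/my-project", body=job)
response = request.execute()
```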
82   …r to sending it queries. See the [training documentation](https://cloud.google.com/dialogflow/cx/d…
85   …r to sending it queries. See the [training documentation](https://cloud.google.com/dialogflow/cx/d…
97   …r to sending it queries. See the [training documentation](https://cloud.google.com/dialogflow/cx/d…
106  …r to sending it queries. See the [training documentation](https://cloud.google.com/dialogflow/cx/d…
113  …or the Dialogflow API to use to match user input to an intent by adding training phrases (i.e., ex…
116  …nt creation. Adding training phrases to fallback intent is useful in the case of requests that are…
124  …Required. The unique identifier of the parameter. This field is used by training phrases to annota…
130  "trainingPhrases": [ # The collection of training phrases the agent is trained on to ide…
132  "id": "A String", # Output only. The unique identifier of the training phrase.
133  …training phrase parts. The parts are concatenated in order to form the training phrase. Note: The …
[all …]
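The Dialogflow CX snippet above shows the intent fields involved: a trainingPhrases collection whose entries carry an output-only id and an ordered list of parts that concatenate into the phrase. A minimal sketch of that shape with placeholder values; the parameterId annotation on a part is an assumption about how parts reference parameters, so verify it against the full reference.

```python
# Illustrative shape of one intent's training phrases, mirroring the REST
# fields quoted above ("trainingPhrases", "id", parts concatenated in order).
intent = {
    "displayName": "book.flight",
    "trainingPhrases": [
        {
            # "id" is output only, so it is omitted when creating the phrase.
            "repeatCount": 1,
            "parts": [
                {"text": "I want to fly to "},
                # Assumed annotation: linking this part to a parameter.
                {"text": "Paris", "parameterId": "destination"},
            ],
        },
    ],
}
```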
125 "bestTrialId": "A String", # The best trial_id across all training runs.173 …"trainingRuns": [ # Output only. Information for all training runs in increasing order o…174 { # Information about a single training query run for the model.175 …ains references to the training and evaluation data tables that were used to train the model. # Da…181 "trainingTable": { # Table reference of the training data after split.187 …training data or just the eval data based on whether eval data was used during training. These are…215 …_series_id_column specified during ARIMA model training. Only present when time_series_id_column t…216 …series_id_columns specified during ARIMA model training. Only present when time_series_id_columns …268 …"count": "A String", # Count of training data rows that were assigned to this …275 …"count": "A String", # The count of training samples matching the category wit…[all …]
16 """Support for training models.18 See the [Training](https://tensorflow.org/api_guides/python/train) guide.26 from tensorflow.python.training.adadelta import AdadeltaOptimizer27 from tensorflow.python.training.adagrad import AdagradOptimizer28 from tensorflow.python.training.adagrad_da import AdagradDAOptimizer29 from tensorflow.python.training.proximal_adagrad import ProximalAdagradOptimizer30 from tensorflow.python.training.adam import AdamOptimizer31 from tensorflow.python.training.ftrl import FtrlOptimizer32 from tensorflow.python.training.experimental.loss_scale_optimizer import MixedPrecisionLossScaleOpt…33 from tensorflow.python.training.experimental.mixed_precision import enable_mixed_precision_graph_re…[all …]
69   def call(self, input_tensor, training=False):
71   x = self.bn2a(x, training=training)
75   x = self.bn2b(x, training=training)
79   x = self.bn2c(x, training=training)
144  def call(self, input_tensor, training=False):
146  x = self.bn2a(x, training=training)
150  x = self.bn2b(x, training=training)
154  x = self.bn2c(x, training=training)
157  shortcut = self.bn_shortcut(shortcut, training=training)
293  def call(self, inputs, training=True, intermediates_dict=None):
[all …]
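These test snippets show the standard Keras pattern of threading the training argument from a subclassed call into BatchNormalization, so batch statistics are used while training and moving averages at inference. A self-contained sketch of the same pattern; layer sizes are arbitrary.

```python
import tensorflow as tf

class IdentityBlock(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size):
        super().__init__()
        self.conv2a = tf.keras.layers.Conv2D(filters, kernel_size, padding="same")
        self.bn2a = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv2a(input_tensor)
        x = self.bn2a(x, training=training)  # forward training, as above
        return tf.nn.relu(x)

block = IdentityBlock(filters=8, kernel_size=3)
out = block(tf.zeros([1, 16, 16, 8]), training=True)
```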
34 "tensorflow.python.keras.engine.training")44 call_fn: tf.function that takes layer inputs (and possibly a training arg),46 default_training_value: Default value of the training kwarg. If `None`, the98 """Returns whether this layer or any of its children uses the training arg."""132 """Decorate call and optionally adds training argument.134 If a layer expects a training argument, this function ensures that 'training'135 is present in the layer args or kwonly args, with the default training value.140 expects_training_arg: Whether to include 'training' argument.141 default_training_value: Default value of the training kwarg to include in146 function that calls `wrapped_call` and sets the training arg,[all …]
48   def test_forward(self, device, dtype, module_info, training):
51   requires_grad=False, training=training)
65   m.train(training)
87   def test_factory_kwargs(self, device, dtype, module_info, training):
90   requires_grad=False, training=training)
103  m.train(training)
130  m.train(training)
139  m.train(training)
144  def test_multiple_device_transfer(self, device, dtype, module_info, training):
147  … requires_grad=False, training=training)
[all …]
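These PyTorch tests parametrize over training and flip each module with m.train(training). A tiny sketch of what that toggle changes, using Dropout as the example.

```python
import torch

m = torch.nn.Dropout(p=0.5)
x = torch.ones(8)

m.train(True)    # training mode: roughly half the elements are zeroed
print(m(x))
m.train(False)   # equivalent to m.eval(): dropout becomes the identity
print(m(x))
```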
214  * The Cloud Storage location where the training data is to be
217  * `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`
219  * All training input data is written into that directory.
222  * format to support sharded data. e.g.: "gs://.../training-*.jsonl"
225  … * "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AI…
241  * The Cloud Storage location where the training data is to be
244  * `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`
246  * All training input data is written into that directory.
249  * format to support sharded data. e.g.: "gs://.../training-*.jsonl"
252  … * "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AI…
[all …]
25   * Specifies Vertex AI owned input data to be used for training, and
446  * The Cloud Storage location where the training data is to be
449  * `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`
451  * All training input data is written into that directory.
454  * format to support sharded data. e.g.: "gs://.../training-*.jsonl"
457  … * "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AI…
476  * The Cloud Storage location where the training data is to be
479  * `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`
481  * All training input data is written into that directory.
484  * format to support sharded data. e.g.: "gs://.../training-*.jsonl"
[all …]
25   * Specifies Vertex AI owned input data to be used for training, and
448  * The Cloud Storage location where the training data is to be
451  * `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`
453  * All training input data is written into that directory.
456  * format to support sharded data. e.g.: "gs://.../training-*.jsonl"
459  … * "gcs_destination/dataset-<dataset-id>-<annotation-type>-<time>/training-*.${AI…
478  * The Cloud Storage location where the training data is to be
481  * `dataset-<dataset-id>-<annotation-type>-<timestamp-of-training-call>`
483  * All training input data is written into that directory.
486  * format to support sharded data. e.g.: "gs://.../training-*.jsonl"
[all …]
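The three snippets above describe the same export layout: training input is written under a timestamped directory inside the chosen Cloud Storage destination, as sharded JSONL files matching training-*.jsonl. A hypothetical reconstruction of the prefix; the exact timestamp format is not shown in the snippets, so the one used here is an assumption.

```python
from datetime import datetime, timezone

def training_data_prefix(gcs_destination, dataset_id, annotation_type):
    # Mirrors the documented pattern:
    #   <gcs_destination>/dataset-<dataset-id>-<annotation-type>-<timestamp>
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")  # assumed format
    return f"{gcs_destination}/dataset-{dataset_id}-{annotation_type}-{ts}"

# All shards then match <prefix>/training-*.jsonl, per the wildcard above.
print(training_data_prefix("gs://my-bucket/out", "123", "image_classification"))
```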
37   // The TrainingPipeline orchestrates tasks associated with training a Model. It
38   // always executes the training task, and optionally may also
39   // export data from Vertex AI's Dataset which becomes the training input,
54   // Specifies Vertex AI owned input data that may be used for training the
66   // training task which is responsible for producing the model artifact, and
75   // Required. The training task's parameter(s), as specified in the
84   // about the training task. While the pipeline is running this information is
101  // training task either uploads the Model without a need of this information,
102  // or that training task does not support uploading a Model as part of the
167  // Specifies Vertex AI owned input data to be used for training, and
[all …]
37   // The TrainingPipeline orchestrates tasks associated with training a Model. It
38   // always executes the training task, and optionally may also
39   // export data from Vertex AI's Dataset which becomes the training input,
54   // Specifies Vertex AI owned input data that may be used for training the
66   // training task which is responsible for producing the model artifact, and
75   // Required. The training task's parameter(s), as specified in the
84   // about the training task. While the pipeline is running this information is
101  // training task either uploads the Model without a need of this information,
102  // or that training task does not support uploading a Model as part of the
166  // Specifies Vertex AI owned input data to be used for training, and
[all …]
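Both proto snippets above define the Vertex AI TrainingPipeline, which always runs a training task and may export Dataset data as its training input. One hedged way to create such a pipeline is the high-level Python SDK, which builds a TrainingPipeline resource under the hood; the project, region, and container image below are placeholders.

```python
# Sketch of launching a custom training pipeline with google-cloud-aiplatform.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.CustomContainerTrainingJob(
    display_name="my-training-pipeline",
    container_uri="gcr.io/my-project/trainer:latest",  # placeholder image
)
# Runs the training task; a Model is uploaded only if the task supports it,
# matching the caveat in the proto comments above.
job.run(replica_count=1, machine_type="n1-standard-4")
```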
27   * to an intent by adding training phrases (i.e., examples of user input) to
97   * Output only. The unique identifier of the training phrase.
109  * Output only. The unique identifier of the training phrase.
122  * Required. The ordered list of training phrase parts.
123  * The parts are concatenated in order to form the training phrase.
124  * Note: The API does not automatically annotate training phrases like the
127  * training phrase is well formatted when the parts are concatenated.
128  * If the training phrase does not need to be annotated with parameters,
132  * If you want to annotate the training phrase, you must create multiple
149  * Required. The ordered list of training phrase parts.
[all …]
27   * to an intent by adding training phrases (i.e., examples of user input) to
97   * Output only. The unique identifier of the training phrase.
109  * Output only. The unique identifier of the training phrase.
122  * Required. The ordered list of training phrase parts.
123  * The parts are concatenated in order to form the training phrase.
124  * Note: The API does not automatically annotate training phrases like the
127  * training phrase is well formatted when the parts are concatenated.
128  * If the training phrase does not need to be annotated with parameters,
132  * If you want to annotate the training phrase, you must create multiple
148  * Required. The ordered list of training phrase parts.
[all …]
6    training processes on each of the training nodes.
12   The utility can be used for single-node distributed training, in which one or
14   CPU training or GPU training. If the utility is used for GPU training,
16   well-improved single-node training performance. It can also be used in
17   multi-node distributed training, by spawning up multiple processes on each node
18   for well-improved multi-node distributed training performance as well.
23   In both cases of single-node distributed training or multi-node distributed
24   training, this utility will launch the given number of processes per node
25   (``--nproc-per-node``). If used for GPU training, this number needs to be less
32   1. Single-Node multi-process distributed training
[all …]
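This is the docstring of torch.distributed.launch, which spawns --nproc-per-node processes per node. A minimal per-process stub such a launcher could run; the script name is a placeholder, and depending on the PyTorch version the local rank arrives as a LOCAL_RANK environment variable (newer releases) or a --local-rank argument (older ones).

```python
# Launched, for example, as:
#   python -m torch.distributed.launch --nproc-per-node=4 train.py
import os
import torch
import torch.distributed as dist

def main():
    local_rank = int(os.environ.get("LOCAL_RANK", 0))  # set by the launcher
    # The launcher also sets MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE.
    dist.init_process_group(backend="gloo")  # use "nccl" for GPU training
    model = torch.nn.Linear(10, 10)
    ddp_model = torch.nn.parallel.DistributedDataParallel(model)
    out = ddp_model(torch.randn(2, 10))  # gradients sync across processes
    print(f"rank {dist.get_rank()} of {dist.get_world_size()} ready")

if __name__ == "__main__":
    main()
```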
META-INF/
META-INF/MANIFEST.MF
com/
com/android/
com/ ...
19   PreemptionCheckpointHandler reduces loss of training progress caused by
21   training and avoid surfacing an error indistinguishable from application errors
83   until training finishes.
89   training program gracefully. For `tf.distribute.MultiWorkerMirroredStrategy`,
91   customized `exit_fn` may facilitate the restart and smoothen the training
94   coordinating script that starts up the training, in which they can configure
97   training seamless.
103  there is the option to utilize this gap time for training as much as possible
121  * Automatically utilized the extended training period before save and exit
129  training step, save a checkpoint, and exit the program as soon as we
[all …]
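The docstring above describes checkpoint-and-exit on preemption. A hedged sketch of the usage pattern, assuming the tf.distribute.experimental.PreemptionCheckpointHandler API as of TF 2.10+; paths, step counts, and the toy model are placeholders.

```python
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

checkpoint = tf.train.Checkpoint(model=model)
handler = tf.distribute.experimental.PreemptionCheckpointHandler(
    strategy.cluster_resolver, checkpoint, checkpoint_dir="/tmp/ckpt")

@tf.function
def distributed_train_step():
    def step_fn():
        # A real step would compute gradients and apply an optimizer; this
        # only sketches the control flow around `handler.run`.
        return tf.reduce_sum(model(tf.ones([2, 4])))
    return strategy.run(step_fn)

# Resume from however many steps ran before the last preemption, then let
# `run` save a checkpoint and exit gracefully when a preemption signal arrives.
for _ in range(int(handler.total_run_calls), 1000):
    handler.run(distributed_train_step)
```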
40   // Training pipeline will infer the proper transformation based on the
46   // Training pipeline will perform following transformation functions.
59   // If invalid values is allowed, the training pipeline will create a
61   // Otherwise, the training pipeline will discard the input row from
66   // Training pipeline will perform following transformation functions.
72   // * Categories that appear less than 5 times in the training dataset are
79   // Training pipeline will perform following transformation functions.
102  // If invalid values is allowed, the training pipeline will create a
104  // Otherwise, the training pipeline will discard the input row from
109  // Training pipeline will perform following transformation functions.
[all …]
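The last snippet states that categories appearing fewer than 5 times in the training dataset receive special handling, though the exact replacement is truncated. A small illustration of such a frequency cutoff; the rare-bucket token used here is an assumption, not the pipeline's actual behavior.

```python
from collections import Counter

def collapse_rare_categories(values, min_count=5, rare_token="__RARE__"):
    # Categories seen fewer than `min_count` times are collapsed into one
    # bucket, mirroring the "less than 5 times" rule quoted above.
    counts = Counter(values)
    return [v if counts[v] >= min_count else rare_token for v in values]

print(collapse_rare_categories(["a"] * 6 + ["b"] * 2))
# ['a', 'a', 'a', 'a', 'a', 'a', '__RARE__', '__RARE__']
```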