Bigquery: Add empty lines (via synth). by yoshi-automation · Pull Request #8049 · googleapis/google-cloud-python

Conversation

yoshi-automation
Contributor

This PR was generated using Autosynth. 🌈

Here's the log from Synthtool:

synthtool > Executing /tmpfs/src/git/autosynth/working_repo/bigquery/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:3246adac900f4bdbd62920e80de2e5877380e44036b3feae13667ec255ebf5ec
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/bigquery/artman_bigquery_v2.yaml.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquery-v2.
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/v2/model_reference.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquery-v2/google/cloud/bigquery_v2/proto/model_reference.proto
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/v2/model.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquery-v2/google/cloud/bigquery_v2/proto/model.proto
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/v2/standard_sql.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquery-v2/google/cloud/bigquery_v2/proto/standard_sql.proto
synthtool > Placed proto files into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquery-v2/google/cloud/bigquery_v2/proto.
synthtool > Replaced '"""Attributes:' in google/cloud/bigquery_v2/proto/model_pb2.py.
synthtool > Replaced '[“”]' in google/cloud/bigquery_v2/proto/model_pb2.py.
Running session blacken
Creating virtualenv using python3.6 in /tmpfs/src/git/autosynth/working_repo/bigquery/.nox/blacken
pip install black
black docs google samples tests noxfile.py setup.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/gapic/enums.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/proto/model_pb2_grpc.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/proto/model_reference_pb2_grpc.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/proto/model_reference_pb2.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/proto/standard_sql_pb2_grpc.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/types.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/proto/standard_sql_pb2.py
reformatted /tmpfs/src/git/autosynth/working_repo/bigquery/google/cloud/bigquery_v2/proto/model_pb2.py
All done! ✨ 🍰 ✨
8 files reformatted, 82 files left unchanged.
Session blacken was successful.
synthtool > Cleaned up 0 temporary directories.
synthtool > Wrote metadata to synth.metadata.
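
For context, this log comes from the bigquery synth.py named on its first line: the script invokes synthtool's GAPIC generator and then post-processes the output. The sketch below is a hypothetical reconstruction inferred from the log, not the repository's actual synth.py; the moved paths and replacement strings are assumptions.

# Hypothetical reconstruction of bigquery/synth.py, inferred from the log
# above; exact paths and replacement strings are assumptions.
import synthtool as s
from synthtool import gcp

gapic = gcp.GAPICGenerator()

# Run artman with the config named in the log. include_protos=True accounts
# for the three "Copy: ... .proto" lines above.
library = gapic.py_library(
    "bigquery",
    "v2",
    config_path="/google/cloud/bigquery/artman_bigquery_v2.yaml",
    artman_output_name="bigquery-v2",
    include_protos=True,
)

# Pull only the generated pieces the handwritten client wraps.
s.move(library / "google/cloud/bigquery_v2/gapic/enums.py")
s.move(library / "google/cloud/bigquery_v2/types.py")
s.move(library / "google/cloud/bigquery_v2/proto")

# Docstring cleanups reported as "Replaced ..." in the log: give the proto
# docstrings a title line and normalize curly quotes to ASCII.
s.replace(
    "google/cloud/bigquery_v2/proto/model_pb2.py",
    '"""Attributes:',
    '"""Protocol buffer.\n\n  Attributes:',
)
s.replace("google/cloud/bigquery_v2/proto/model_pb2.py", "[“”]", '"')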

@yoshi-automation yoshi-automation requested a review from a team May 21, 2019 12:13
@yoshi-automation yoshi-automation added the api: bigquery Issues related to the BigQuery API. label May 21, 2019
@googlebot googlebot added the cla: yes This human has signed the Contributor License Agreement. label May 21, 2019
@busunkim96 busunkim96 changed the title from “[CHANGE ME] Re-generated bigquery to pick up changes in the API or client library generator.” to “Bigquery: Add empty lines (via synth).” May 21, 2019
@busunkim96
Contributor

Snippets tests are failing. @googleapis/api-bigquery

@tswast
Contributor
tswast commented May 21, 2019

Test failure looks like a real one.

=================================== FAILURES ===================================
______________________________ test_model_samples ______________________________
capsys = <_pytest.capture.CaptureFixture object at 0x7faf4d5a0f90>
client = <google.cloud.bigquery.client.Client object at 0x7faf4dc06b90>
dataset_id = 'precise-truck-742.python_samples_20190521123855_f9e4b5df'
model_id = 'precise-truck-742.python_samples_20190521123855_f9e4b5df.7358590e7d5a413bbb52c37b095003a8'
    def test_model_samples(capsys, client, dataset_id, model_id):
        """Since creating a model is a long operation, test all model samples in
        the same test, following a typical end-to-end flow.
        """
>       get_model.get_model(client, model_id)
samples/tests/test_model_samples.py:25:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
samples/get_model.py:28: in get_model
    model = client.get_model(model_id)
google/cloud/bigquery/client.py:467: in get_model
    return Model.from_api_repr(api_response)
google/cloud/bigquery/model.py:282: in from_api_repr
    this._proto = json_format.ParseDict(resource, types.Model())
.nox/snippets-2-7/lib/python2.7/site-packages/google/protobuf/json_format.py:439: in ParseDict
    parser.ConvertMessage(js_dict, message)
.nox/snippets-2-7/lib/python2.7/site-packages/google/protobuf/json_format.py:470: in ConvertMessage
    self._ConvertFieldValuePair(value, message)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.protobuf.json_format._Parser object at 0x7faf4d5a0110>
js = {'creationTime': '1558442352137', 'etag': 'IAly/E8PvWDzQqfJoWcjCg==', 'featureColumns': [{'name': 'f1', 'type': {'typeKind': 'STRING'}}], 'labelColumns': [{'name': 'predicted_label', 'type': {'typeKind': 'FLOAT64'}}], ...}
message = model_reference {
  project_id: "precise-truck-742"
  dataset_id: "python_samp...abel_columns {
  name: "predicted_label"
  type {
    type_kind: FLOAT64
  }
}
    def _ConvertFieldValuePair(self, js, message):
      """Convert field value pairs into regular message.
      Args:
        js: A JSON object to convert the field value pairs.
        message: A regular protocol message to record the data.
      Raises:
        ParseError: In case of problems converting.
      """
      names = []
      message_descriptor = message.DESCRIPTOR
      fields_by_json_name = dict((f.json_name, f)
                                 for f in message_descriptor.fields)
      for name in js:
        try:
          field = fields_by_json_name.get(name, None)
          if not field:
            field = message_descriptor.fields_by_name.get(name, None)
          if not field and _VALID_EXTENSION_NAME.match(name):
            if not message_descriptor.is_extendable:
              raise ParseError('Message type {0} does not have extensions'.format(
                  message_descriptor.full_name))
            identifier = name[1:-1]  # strip [] brackets
            identifier = '.'.join(identifier.split('.')[:-1])
            # pylint: disable=protected-access
            field = message.Extensions._FindExtensionByName(identifier)
            # pylint: enable=protected-access
          if not field:
            if self.ignore_unknown_fields:
              continue
            raise ParseError(
                ('Message type "{0}" has no field named "{1}".\n'
                 ' Available Fields(except extensions): {2}').format(
                     message_descriptor.full_name, name,
                     [f.json_name for f in message_descriptor.fields]))
          if name in names:
            raise ParseError('Message type "{0}" should not have multiple '
                             '"{1}" fields.'.format(
                                 message.DESCRIPTOR.full_name, name))
          names.append(name)
          # Check no other oneof field is parsed.
          if field.containing_oneof is not None:
            oneof_name = field.containing_oneof.name
            if oneof_name in names:
              raise ParseError('Message type "{0}" should not have multiple '
                               '"{1}" oneof fields.'.format(
                                   message.DESCRIPTOR.full_name, oneof_name))
            names.append(oneof_name)
          value = js[name]
          if value is None:
            if (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE
                and field.message_type.full_name == 'google.protobuf.Value'):
              sub_message = getattr(message, field.name)
              sub_message.null_value = 0
            else:
              message.ClearField(field.name)
            continue
          # Parse field value.
          if _IsMapEntry(field):
            message.ClearField(field.name)
            self._ConvertMapFieldValue(value, message, field)
          elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
            message.ClearField(field.name)
            if not isinstance(value, list):
              raise ParseError('repeated field {0} must be in [] which is '
                               '{1}.'.format(name, value))
            if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
              # Repeated message field.
              for item in value:
                sub_message = getattr(message, field.name).add()
                # None is a null_value in Value.
                if (item is None and
                    sub_message.DESCRIPTOR.full_name != 'google.protobuf.Value'):
                  raise ParseError('null is not allowed to be used as an element'
                                   ' in a repeated field.')
                self.ConvertMessage(item, sub_message)
            else:
              # Repeated scalar field.
              for item in value:
                if item is None:
                  raise ParseError('null is not allowed to be used as an element'
                                   ' in a repeated field.')
                getattr(message, field.name).append(
                    _ConvertScalarFieldValue(item, field))
          elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
            if field.is_extension:
              sub_message = message.Extensions[field]
            else:
              sub_message = getattr(message, field.name)
            sub_message.SetInParent()
            self.ConvertMessage(value, sub_message)
          else:
            setattr(message, field.name, _ConvertScalarFieldValue(value, field))
        except ParseError as e:
          if field and field.containing_oneof is None:
>           raise ParseError('Failed to parse {0} field: {1}'.format(name, e))
E           ParseError: Failed to parse trainingRuns field: Failed to parse trainingOptions field: Message type "google.cloud.bigquery.v2.Model.TrainingRun.TrainingOptions" has no field named "optimizationStrategy".
E            Available Fields(except extensions): ['maxIterations', 'lossType', 'learnRate', 'l1Regularization', 'l2Regularization', 'minRelativeProgress', 'warmStart', 'earlyStop', 'inputLabelColumns', 'dataSplitMethod', 'dataSplitEvalFraction', 'dataSplitColumn', 'learnRateStrategy', 'initialLearnRate', 'labelClassWeights', 'distanceType', 'numClusters']
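
The last two lines pinpoint the cause: the live API now returns an optimizationStrategy training option, but the generated Model proto in this PR predates that field, and json_format.ParseDict is strict by default. A minimal sketch of that parser behavior, using google.protobuf's Duration as a stand-in message (an arbitrary choice, just to keep the example self-contained):

# Sketch of the failure mode; Duration stands in for the BigQuery Model proto.
from google.protobuf import json_format
from google.protobuf import duration_pb2

msg = duration_pb2.Duration()

try:
    # Strict parsing (the default) rejects fields the schema does not define.
    json_format.ParseDict({"seconds": 1, "optimizationStrategy": "X"}, msg)
except json_format.ParseError as exc:
    print(exc)  # Message type "google.protobuf.Duration" has no field named ...

# With ignore_unknown_fields=True the parser skips such fields instead.
json_format.ParseDict(
    {"seconds": 1, "optimizationStrategy": "X"}, msg, ignore_unknown_fields=True
)
print(msg.seconds)  # 1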

@tswast tswast added the do not merge Indicates a pull request not ready for merge, due to either quality or timing. label May 21, 2019
@tswast
Contributor
tswast commented May 21, 2019

Fix pending in #8083
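
One plausible shape for that fix, shown here only as an assumption (see #8083 for what actually landed): either regenerate the protos so TrainingOptions gains the new field, or make the failing call in Model.from_api_repr tolerant of fields the generated proto does not know yet.

# Hypothetical variant of the failing line from google/cloud/bigquery/model.py;
# whether #8083 took this route or instead regenerated the protos is not
# established here.
this._proto = json_format.ParseDict(
    resource, types.Model(), ignore_unknown_fields=True
)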

@tswast tswast removed the do not merge Indicates a pull request not ready for merge, due to either quality or timing. label May 21, 2019
@tswast tswast merged commit 23ba7ac into master May 21, 2019
@tseaver tseaver deleted the autosynth-bigquery branch May 24, 2019 17:42
Labels
api: bigquery (Issues related to the BigQuery API.) · cla: yes (This human has signed the Contributor License Agreement.) · codegen