Deployment Fails in a SageMaker Environment

Asked 2 years ago, Updated 2 years ago, 60 views

Prerequisites

We are building a TTS system using acoustic models on AWS SageMaker.

What I want to do

I would like to deploy an acoustic model with SageMaker.

Problem / error messages

The following error occurs repeatedly during deployment:

ERROR - /.sagemaker/mms/models/model already exists.
Please specify --force/-f option to overwrite the model archive output file.
See -h/--help for more details.

Deployment Code

from sagemaker import get_execution_role
from sagemaker.pytorch.model import PyTorchModel

role = get_execution_role()

pytorch_model = PyTorchModel(model_data='s3://sagemaker-alterly/model.tar.gz',
                             role=role,
                             framework_version="1.3.1",
                             py_version="py3",
                             entry_point='inference.py')

predictor = pytorch_model.deploy(instance_type='ml.t2.2xlarge', initial_instance_count=1)
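
Once deploy() returns, the endpoint can be invoked to check that it responds. The following is a minimal sketch of such a call with boto3; the endpoint is the one created by deploy() above, and the request body is just a placeholder string (the input_fn in inference.py below ignores it anyway).

import boto3

# Invoke the endpoint created by deploy() above; the body is only a placeholder.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,
    ContentType="text/plain",
    Body="Aiueo",
)
print(response["Body"].read())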

※ entry_point inference code: inference.py

import os
import time
import torch
import pyopenjtalk
from espnet2.bin.tts_inference import Text2Speech
import matplotlib.pyplot as plt
from espnet2.tasks.tts import TTSTask
from espnet2.text.token_id_converter import TokenIDConverter
import numpy as np

import argparse
import text_processing as texp

import boto3

prosodic=True

model_dir="model/"
vocoder_dir="vocoder/"
CONTENT_TYPE="text/plain"

train_config = "model/config.yaml"
model_file="model/50epoch.pth"
# train_config = ""
# model_file = ""

vocoder_tag = "parallel_wavegan/jsut_hifigan.v1"
# Specify a vocoder
vocoder_config = "vocoder/config.yaml"
vocoder_file="vocoder/50epoch.pth"


def model_fn(model_dir):
    print(model_dir+"config.yaml")
    print(model_dir+"100epoch.pth")
    model = Text2Speech.from_pretrained(
        train_config=model_dir + "config.yaml",
        model_file=model_dir + "100epoch.pth",
        vocoder_tag=vocoder_tag,
        device="cpu",
        speed_control_alpha=1.0,
        noise_scale=0.333,
        noise_scale_dur=0.333,
    )

    return model


def input_fn(request_body, content_type=CONTENT_TYPE):
    input_data = "Aiueo"
    return input_data


def predict_fn(input_data, model):
    import torch
    import os
    import numpy as np

    x = "Demo Text"

    # model,train_args = TTSTask.build_model_from_file(
    #         train_config, model_file, "cuda"
    #        )

    token_id_converter = TokenIDConverter(
        token_list=model.train_args.token_list,
        unk_symbol="<unk>",
    )

    text = x
    if prosodic:
        tokens = texp.a2p(x)
        text_ints = token_id_converter.tokens2ids(tokens)
        text = np.array(text_ints)
    else:
        print("\npyopenjtalk_accent_with_pause results:")
        print(texp.text2yomi(x), "\n")

    # synthesis
    with torch.no_grad():
        start = time.time()
        data = model(text)
        wav = data["wav"]
        # print(text2speech.preprocess_fn("<dummy>", dict(text=x))["text"])
    rtf = (time.time() - start) / (len(wav) / model.fs)
    print(f"RTF={rtf:5f}")

    if not os.path.isdir("generated_wav"):
        os.makedirs("generated_wav")

    # let us listen to generated samples
    from IPython.display import display, Audio
    import numpy as np
    # display (Audio(wav.view(-1.cpu().numpy(), rate=text2speech.fs))
    # Audio (wav.view(-1.cpu().numpy(), rate=text2speech.fs)
    np_wav=wav.view(-1.cpu().numpy()

    fs = 48000
    print("Sampling Rate", fs, で.")
    from scipy.io.wavfile import write
    sample=fs
    t=np.linspace(0,1,samplerate)
    amplitude=np.iinfo(np.int16).max
    data=amplitude*np_wav/np.max(np.abs(np_wav))
    write("espnet/egs2/jsut/ts1/generated_wav/"+x+
          ".wav", sample, data.astype(np.int16))
    print("\n\n\n")

    # Connecting to a Bucket
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('alterly-source')
    bucket.upload_file("espnet/egs2/jsut/ts1/generated_wav/" +
                       x + ".wav", "source/" + x + ".wav")

    x = "exit"


input_object = input_fn("Aiueo", "text/plain")
model = model_fn(model_dir)
prediction = predict_fn(input_object, model)
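
Note that predict_fn uploads the generated wav to S3 and returns nothing, so the endpoint response itself carries no audio. If the endpoint should return something to the caller, the SageMaker PyTorch serving stack also lets you define an optional output_fn. The following is only a sketch, assuming predict_fn were changed to return, for example, the S3 key of the uploaded file.

def output_fn(prediction, accept=CONTENT_TYPE):
    # Sketch only: assumes predict_fn is modified to return a string
    # (e.g. the S3 key of the generated wav) instead of None.
    return str(prediction)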

Run with the following command

!inference.py

Supplementary Information

The directory structure before compression is as follows:

(screenshot of the directory tree omitted)

I compress this into model.tar.gz and upload it to S3.
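
The exact layout in the screenshot is not visible here, but given the paths that model_fn reads (model_dir + "config.yaml", model_dir + "100epoch.pth"), the artifacts presumably have to sit at the root of the archive. Below is a minimal sketch of building such an archive with Python's tarfile, assuming the local model/ and vocoder/ directories referenced in inference.py.

import tarfile

# Sketch: place config.yaml and 100epoch.pth at the archive root so that
# model_fn(model_dir) can find them after SageMaker extracts model.tar.gz.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model/config.yaml", arcname="config.yaml")
    tar.add("model/100epoch.pth", arcname="100epoch.pth")
    tar.add("vocoder", arcname="vocoder")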

Since this is urgent, I am also asking about it on another site.
If there is any progress there, I will share it here.
https://teratail.com/questions/j8ux53rs8n7v2t

If anyone knows how to deal with this, I would appreciate it if you could let me know.

python python3 aws machine-learning

2022-09-30 16:31

1 Answer

I solved it myself.
Changing the values of framework_version and py_version in the code below fixed the problem.

Wrong:

pytorch_model = PyTorchModel(model_data='s3://sagemaker-alterly/model.tar.gz',
                             role=role,
                             framework_version="1.3.1",
                             py_version="py3",
                             entry_point='inference.py')

Correct:

pytorch_model = PyTorchModel(model_data='s3://sagemaker-alterly/model.tar.gz',
                             role=role,
                             framework_version="1.12",
                             py_version="py38",
                             entry_point='inference.py')
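
One way to check whether a framework_version / py_version pair is actually available as a prebuilt PyTorch inference image is to try resolving it with image_uris.retrieve; an unsupported combination raises an error. The region and instance type below are only examples.

import boto3
from sagemaker import image_uris

# Resolves the prebuilt PyTorch inference image for the given versions;
# raises if the framework_version / py_version combination is not supported.
uri = image_uris.retrieve(
    framework="pytorch",
    region=boto3.Session().region_name,
    version="1.12",
    py_version="py38",
    image_scope="inference",
    instance_type="ml.m5.xlarge",
)
print(uri)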

My apologies to those who left comments for the trouble.
Thank you very much.


2022-09-30 16:31
