After cleaning up the DataFrame through the SQLContext in Spark, I run the code below and write the result out over JDBC.
from pyspark.sql.functions import expr  # needed for expr("uuid()") below

df_2.createOrReplaceTempView("vms_status")
q = """SELECT
srv_name,
srv_serial,
groups_id,
item_id,
if(item_param is null, '', item_param) as item_param,
count(srv_serial) as recv_count,
min(item_time) as item_min_time,
max(item_time) as item_max_time,
int(round(avg(item_value),2)) as item_value,
NOW() as sdc_ins_time,
if (avg(item_value) is null, last(item_value), null) as item_value_str
FROM vms_status
GROUP BY srv_name, srv_serial, groups_id, item_id, item_param
"""
df_3 = sql_context.sql(q).withColumn("history_serial", expr("uuid()").cast("String"))
df_3.show(truncate=False)
jdbc_write(df_3, "append", table_name)
When I show() df_3 from the code above, sdc_ins_time is displayed as the current time in UTC, as expected (the Spark session time zone has been set to UTC). But the value inserted into MSSQL was saved as the current local time, which is UTC+9. Where do I need to fix this? I need a UTC "now" value to be stored.
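For reference, one way to pin things to UTC on the Spark side (a minimal sketch; whether you can change these settings in your deployment is an assumption). The session time zone only controls how Spark parses and displays timestamps; the JDBC write serializes timestamps using the JVM's default time zone, which is usually where the +9h shift comes from.

from pyspark.sql import SparkSession
from pyspark.sql.functions import date_format

# The session time zone affects show() and timestamp parsing only.
spark = (
    SparkSession.builder
    .config("spark.sql.session.timeZone", "UTC")
    .getOrCreate()
)

# The JVM default zone is what the JDBC driver uses when serializing
# timestamps; it normally has to be set before the JVM starts, e.g.:
#   spark-submit --conf spark.driver.extraJavaOptions=-Duser.timezone=UTC \
#                --conf spark.executor.extraJavaOptions=-Duser.timezone=UTC ...

# Alternative that sidesteps time-zone conversion entirely: write the
# timestamp as a pre-formatted string, which no layer can shift.
df_3 = df_3.withColumn("sdc_ins_time",
                       date_format("sdc_ins_time", "yyyy-MM-dd HH:mm:ss"))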
python mysql mssql
This may be off the mark, but in most cases like this the problem is that the DB server's time zone hasn't been set.
Check which machine's clock the value is based on, and double-check the query you wrote. If it still looks odd, take a look at the table definition itself.
The column's default is probably something like CURRENT_TIMESTAMP, so you should either fix the DB server's time or modify the table.
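To check that, compare the server's local and UTC clocks directly. A minimal sketch using pyodbc (the connection string is a placeholder; GETDATE() and GETUTCDATE() are standard T-SQL functions):

import pyodbc

# Placeholder connection string: substitute your own server, database,
# and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your_server;DATABASE=your_db;UID=user;PWD=password"
)
# GETDATE() is the server's local clock, GETUTCDATE() its UTC clock.
# A 9-hour gap here means a column default like GETDATE() stores local
# time, not UTC.
row = conn.cursor().execute(
    "SELECT GETDATE() AS local_now, GETUTCDATE() AS utc_now"
).fetchone()
print("server local:", row.local_now, "| server UTC:", row.utc_now)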
P.S.
I've never used Spark before, so I'm not sure about the syntax.
The query uses a SELECT keyword; is it right to write it like that? I'm not sure, but inserting into a DB usually uses the INSERT keyword, so it feels strange to me.
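On the P.S.: in Spark, the SELECT only builds a DataFrame; the INSERT statements are generated by the JDBC writer when the DataFrame is written out. A sketch of what a helper like jdbc_write(df_3, "append", table_name) presumably wraps (the URL and credentials are placeholders):

# Spark's DataFrameWriter issues the INSERTs itself; no INSERT keyword
# appears in user code.
(
    df_3.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://your_server:1433;databaseName=your_db")
    .option("dbtable", table_name)
    .option("user", "user")
    .option("password", "password")
    .mode("append")
    .save()
)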