postgresql - Postgres datetime and bigint handling


Our development team has run into a strange bug in JIRA. Trying to clean up the mess the bug left behind, we want to update the dates of our sprints in the JIRA database.

We are using Windows Server and have Postgres installed on it.

I have found the relevant table, and when I write

select * from "ao_60db71_sprint";

I find this:


closed; complete_date; end_date; id; name; rapid_view_id; sequence; started; start_date

t;1433318043661;1433226900000;1;"sprint 1";1;;t;1432190100102
t;1433924067416;-61680144720000;2;"sprint 2";1;;t;-61681095120000
t;1434528978422;-61679885580000;3;"sprint 3";1;;t;-61680144780000
t;1435130684508;-61678935480000;4;"sprint 4";1;;t;-61679540276038
t;1435735227248;-61678337460000;5;"sprint 5";1;;t;-61679115060000
t;1436340875991;-61677749880000;6;"sprint 6";1;;t;-61678354663584
t;1436944702756;-61677125820000;7;"sprint 7";1;;t;-61677730634396
t;1437549239766;-61676517000000;8;"sprint 8";1;;t;-61677121774120
t;1438154558709;-61675915920000;9;"sprint 9";1;;t;-61676520745914
t;1438764063437;-61675313460000;10;"sprint 10";1;;t;-61675918235812
t;1439366509383;-61674701940000;11;"sprint 11";1;;t;-61675306752010
t;1439970303684;-61674080220000;12;"sprint 12";1;;t;-61674703008615
f;;1440602460000;13;"sprint 13";1;;t;1439979707567


The interesting fields here are the date values, stored as bigints. A few of these values are positive and the others are negative.

When I look at the dates they represent by writing

select to_timestamp("start_date" / 1000) from "ao_60db71_sprint";

I get:

"2015-05-21 08:35:00+02"
"0015-05-28 11:28:00+01"
"0015-06-08 11:27:00+01"
"0015-06-15 11:22:04+01"
"0015-06-20 09:29:00+01"
"0015-06-29 04:42:17+01"
"0015-07-06 10:02:46+01"
"0015-07-13 11:10:26+01"
"0015-07-20 10:07:35+01"
"0015-07-27 09:29:25+01"
"0015-08-03 11:20:48+01"
"0015-08-10 11:03:12+01"
"2015-08-19 12:21:47+02"
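The to_timestamp conversion can be reproduced outside the database. A minimal Python sketch, using sample values taken from the rows above, shows that the millisecond values divided by 1000 land in year 2015 for the positive entries and in year 0015 for the negative ones:

```python
from datetime import datetime, timezone

# start_date values are epoch *milliseconds*; samples from the table above
positive_ms = 1432190100102    # "sprint 1"
negative_ms = -61681095120000  # "sprint 2"

def to_timestamp(ms):
    # equivalent of Postgres to_timestamp(ms / 1000), evaluated in UTC
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(to_timestamp(positive_ms).year)  # 2015
print(to_timestamp(negative_ms).year)  # 15 -- displayed as year 0015
```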

What I want to achieve is to update the above column so that dates in year 0015 are updated to (the bigint corresponding to) year 2015.

My plan is this:

select "start_date",
       extract(epoch from interval '2000 years')*1000 + "start_date"
from "ao_60db71_sprint";

but the resulting datatype of the second column is double precision.
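The double comes from extract(epoch from ...), which returns a double precision value. Postgres counts a year as 365.25 days when extracting the epoch of an interval, so the offset works out to exactly 63,115,200,000 seconds. A quick check of that arithmetic (the 365.25-days-per-year constant is Postgres's documented convention for this, assumed here):

```python
from datetime import datetime, timezone

# extract(epoch from interval '2000 years') uses 1 year = 365.25 days
# and returns double precision -- in Python terms, a float:
offset_seconds = 2000 * 365.25 * 86400
print(offset_seconds)  # 63115200000.0

# Applied (in milliseconds) to a corrupted start_date, it lands back in 2015:
fixed_ms = -61681095120000 + offset_seconds * 1000
print(datetime.fromtimestamp(fixed_ms / 1000, tz=timezone.utc).year)  # 2015
```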

My questions, in the end, are:

  1. Is it safe to make an update that inserts a double into a bigint column?
  2. If not, what is the missing step in the conversion?
  3. Being a total novice with Postgres, how do I make the update?
  4. Do I need to make a commit afterwards?

Thanks in advance.

The field you are updating seems harmless, so there is little risk in performing an update on the column.

You can self-reference the value of start_date in the update query, and you can use a where clause to narrow down the target rows.

Conversions are done using the ::type notation.

A query that does what you want could look like this:

update ao_60db71_sprint
    set start_date = start_date + (extract(epoch from interval '2000 years')*1000)::bigint
    where start_date < 0;

On success it should return UPDATE <count>, and it does not require a commit.
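The effect of the update (offset only the negative rows, leave the already-correct 2015 rows alone) can be sketched in Python over the sample start_date values from the question:

```python
from datetime import datetime, timezone

# start_date values from the question, in epoch milliseconds
start_dates = [
    1432190100102, -61681095120000, -61680144780000, -61679540276038,
    -61679115060000, -61678354663584, -61677730634396, -61677121774120,
    -61676520745914, -61675918235812, -61675306752010, -61674703008615,
    1439979707567,
]

# the same offset the UPDATE adds: 2000 years at 365.25 days/year, in ms
OFFSET_MS = int(2000 * 365.25 * 86400) * 1000

# mirror of: set start_date = start_date + offset where start_date < 0
fixed = [ms + OFFSET_MS if ms < 0 else ms for ms in start_dates]

years = {datetime.fromtimestamp(ms / 1000, tz=timezone.utc).year for ms in fixed}
print(years)  # {2015}
```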
