postgresql - Very slow Postgres UPDATE on large table


I have a Postgres 9.1.3 table with 2.06 million rows matching the where y=1 filter below (the table has a few tens of thousands more rows in total without the where clause). I am trying to add data to an empty field with this query:

with b as (
    select z,
           rank() over (order by l, n, m, p) as x
    from a
    where y = 1
)
update a
set x = b.x
from b
where a.y = 1
  and b.z = a.z;

This query runs for hours and appears to progress very slowly. In fact, the second time I tried this, there was a power outage after the query had run for ~3 hours. After restoring power, I analyzed the table and got this:

INFO:  analyzing "consistent.master"
INFO:  "master": scanned 30000 of 69354 pages, containing 903542 live rows and 153552 dead rows; 30000 rows in sample, 2294502 estimated total rows
Total query runtime: 60089 ms.

Is it correct to interpret this as meaning the query had barely progressed in those hours?

I had done VACUUM FULL and ANALYZE before running the long query.

The query within the WITH clause takes only about 40 seconds on its own.

All fields referenced above, except a.x (and by extension b.x), are indexed: l, m, n, p, y, z.

This is being run on a laptop with 8 GB RAM, a Core i7 Q720 1.6 GHz quad-core processor, and Windows 7 x64. I am running 32-bit Postgres for compatibility with PostGIS 1.5.3; 64-bit PostGIS for Windows isn't available yet. (32-bit Postgres means it can't use more than 2 GB of RAM on Windows, but I doubt that's the issue here.)

Here's the result of EXPLAIN:

Update on a  (cost=727684.76..945437.01 rows=2032987 width=330)
  CTE b
    ->  WindowAgg  (cost=491007.50..542482.47 rows=2058999 width=43)
          ->  Sort  (cost=491007.50..496155.00 rows=2058999 width=43)
                Sort Key: a.l, a.n, a.m, a.p
                ->  Seq Scan on a  (cost=0.00..85066.80 rows=2058999 width=43)
                      Filter: (y = 1)
  ->  Hash Join  (cost=185202.29..402954.54 rows=2032987 width=330)
        Hash Cond: ((b.z)::text = (a.z)::text)
        ->  CTE Scan on b  (cost=0.00..41179.98 rows=2058999 width=88)
        ->  Hash  (cost=85066.80..85066.80 rows=2058999 width=266)
              ->  Seq Scan on a  (cost=0.00..85066.80 rows=2058999 width=266)
                    Filter: (y = 1)

There are multiple possible solutions:

  • The UPDATE may be blocked on a lock. Consult the pg_locks view.
  • Maybe there are triggers on a? They could be the reason for the slowdown.
  • Try EXPLAIN on the UPDATE — is its plan different from the plan of the plain SELECT? Maybe do it in two steps: export "b" into a table, then update from that table.
  • Try dropping the indexes before the UPDATE.
  • Create a new table, drop the old one, then rename the new table to the old table's name.
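The lock check from the first bullet can be sketched as a query against pg_locks; the join to pg_stat_activity assumes Postgres 9.1, where the columns are still named procpid and current_query:

```sql
-- Show ungranted lock requests and what the waiting session is running
SELECT w.locktype,
       w.relation::regclass AS relation,
       w.pid,
       a.current_query
FROM pg_locks w
JOIN pg_stat_activity a ON a.procpid = w.pid
WHERE NOT w.granted;
```

If the UPDATE's pid shows up here with granted = false, something else holds a conflicting lock on the table.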
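The trigger check from the second bullet can be done from the catalogs; "a" here stands for the table name used in the question:

```sql
-- List user-defined triggers on table a (internal FK triggers excluded)
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'a'::regclass
  AND NOT tgisinternal;
```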
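The two-step variant from the third bullet could look like the sketch below: materialize the CTE into a temp table, index the join key so the UPDATE can avoid hashing an unindexable CTE scan, and then update from it. The index name is made up for illustration:

```sql
-- Step 1: materialize the ranking into a real table
CREATE TEMP TABLE b AS
SELECT z,
       rank() OVER (ORDER BY l, n, m, p) AS x
FROM a
WHERE y = 1;

CREATE INDEX b_z_idx ON b (z);
ANALYZE b;

-- Step 2: update from the materialized, indexed table
UPDATE a
SET x = b.x
FROM b
WHERE a.y = 1
  AND b.z = a.z;
```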
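The last bullet trades one big UPDATE (which rewrites every row anyway) for a plain table rewrite. A minimal sketch, assuming the column list of a is filled in by hand and that indexes and constraints are recreated afterwards:

```sql
BEGIN;

-- Build the replacement table with x already populated
CREATE TABLE a_new AS
SELECT a.l, a.m, a.n, a.p, a.y, a.z,   -- list all remaining columns of a here
       r.x
FROM a
LEFT JOIN (
    SELECT z, rank() OVER (ORDER BY l, n, m, p) AS x
    FROM a
    WHERE y = 1
) r ON r.z = a.z AND a.y = 1;

DROP TABLE a;
ALTER TABLE a_new RENAME TO a;
-- recreate indexes, constraints, and triggers on the new table here

COMMIT;
```

This avoids the dead-row bloat an in-place UPDATE of two million rows leaves behind, at the cost of briefly needing disk space for both copies.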
