Bobcares

Unable to DROP an Object in Amazon Redshift Cluster

by | Aug 7, 2021

We may be unable to DROP an object in an Amazon Redshift cluster due to insufficient permissions, object dependency, or lock contention.

Here, at Bobcares, we assist our customers with several AWS queries as part of our AWS Support Services.

Today, let us see how we can fix this.


Unable to DROP an Object in Amazon Redshift Cluster

We may fail to drop an object in the Amazon Redshift cluster due to the following reasons:

  1. Insufficient permissions
  2. Object dependency
  3. Lock contention


How to resolve this?

Moving ahead, let us see how our Support Techs fix these issues.

  • Insufficient permissions

First and foremost, if we don’t have the proper permissions, we fail to drop the object.

In Amazon Redshift, only the owner of the table, the schema owner, or a superuser can drop a table.
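As a quick check before attempting the drop, we can look up who owns the table and compare it with the current session user via the PG_TABLES catalog (the table name sales below is only an example):

```sql
-- Who owns the table? (table name 'sales' is an example)
SELECT schemaname, tablename, tableowner
FROM pg_tables
WHERE tablename = 'sales';

-- Which user are we connected as?
SELECT current_user;
```

If the two do not match and we are not a superuser or the schema owner, the DROP will be denied.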

To confirm user permissions and ownership in detail, we create a view using the v_get_obj_priv_by_user.sql script:

CREATE OR REPLACE VIEW admin.v_get_obj_priv_by_user
AS
SELECT
  ...   -- column list truncated here; the full definition is in the v_get_obj_priv_by_user.sql script

  • Object dependency

If the table's columns are referenced by another view or table, the drop operation might fail with the following error:

Invalid operation: cannot drop table/view <object_name> because other objects depend on it

This indicates object dependencies on the target object.
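As a minimal illustration of how such a dependency arises (all object names here are examples):

```sql
CREATE TABLE sales (id INT, amount DECIMAL(10,2));
CREATE VIEW sales_summary AS SELECT SUM(amount) AS total FROM sales;

-- This fails, because sales_summary depends on sales:
DROP TABLE sales;
```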

To find them, we create the following three views:

  1. A view to identify the constraint dependency.
  2. A view to identify the dependent views.
  3. An object view that aggregates the previous two views.

Once we create them, we use the v_object_dependency.sql script to get the dependent objects of the target object:

select * from admin.v_object_dependency where src_objectname='<target object>';

After that, we drop all the related objects along with the target object by using the CASCADE parameter:

drop table <target object> cascade;

  • Lock contention

Suppose the drop command hangs or produces no output. This usually means another transaction holds a lock on the object.

As a result, the DROP command cannot acquire the AccessExclusiveLock it needs on the table.
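A typical way this happens: one session leaves a transaction open on the table, and the DROP in a second session then waits on the lock (the table name sales is illustrative):

```sql
-- Session 1: an open transaction holds a lock on the table
BEGIN;
SELECT * FROM sales LIMIT 10;   -- lock is held until COMMIT or ROLLBACK

-- Session 2: this DROP waits for an AccessExclusiveLock and appears to hang
DROP TABLE sales;
```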

In order to identify any locks, we run:

select a.txn_owner, a.txn_db, a.xid, a.pid, a.txn_start, a.lock_mode,
       a.relation as table_id,
       nvl(trim(c."name"), d.relname) as tablename,
       a.granted,
       b.pid as blocking_pid,
       datediff(s,a.txn_start,getdate())/86400 || ' days ' ||
       datediff(s,a.txn_start,getdate())%86400/3600 || ' hrs ' ||
       datediff(s,a.txn_start,getdate())%3600/60 || ' mins ' ||
       datediff(s,a.txn_start,getdate())%60 || ' secs' as txn_duration
from svv_transactions a
left join (select pid, relation, granted from pg_locks group by 1,2,3) b
  on a.relation = b.relation and a.granted = 'f' and b.granted = 't'
left join (select * from stv_tbl_perm where slice = 0) c
  on a.relation = c.id
left join pg_class d
  on a.relation = d.oid
where a.relation is not null;
Once we identify the blocking transaction, we either COMMIT it or, if it is no longer needed, terminate its session:

select pg_terminate_backend(PID);

PG_TERMINATE_BACKEND ends the session and releases any locks it holds, after which the DROP can proceed.

[Stuck with the fix? We’d be happy to assist]


Conclusion

In short, we saw how our Support Techs fix the "unable to DROP an object" error in an Amazon Redshift cluster.
