Discussion:
How to track beans which are destroyed, but no exception log?
Jay Schmidgall
2005-01-05 20:00:59 UTC
According to the WLS console, two of our roughly 30 entity beans are
having instances destroyed. However, I cannot find the exception which is
causing them to be destroyed. I see nothing in the WLS logs, nothing in
our own logs, nothing, nothing, nothing.

I am wondering if this might be because those two beans have finder
methods which are invoked and which may not find anything; in that case
the exception is caught and processing continues appropriately. Is this a
scenario where a bean instance might be created and then destroyed?
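
For what it's worth, the calling pattern I have in mind looks roughly like
this; the Account names and the logger are just stand-ins, not our real
code:

    // Sketch of the calling pattern; AccountLocalHome/AccountLocal and the
    // logger are stand-ins for our real classes.
    private AccountLocal lookupAccount(AccountLocalHome home, String id) {
        try {
            // finder on the entity bean's local home interface
            return home.findByPrimaryKey(id);
        } catch (javax.ejb.FinderException notFound) {
            // "not found" is a normal case for us, so we note it and carry on
            log.debug("no row found for id " + id);
            return null;
        }
    }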

Any ideas on how I can determine why these beans are being destroyed, if
the above scenario does not apply?

Thanks!
bill kemp
2005-01-05 22:49:56 UTC
Hi Jay,

Finder methods are called on a pooled instance of the bean. If the finder
doesn't find the entity, a FinderException (which is just an application
exception) should be thrown; it is reported to the client, but the
instance is not destroyed. If a system exception is thrown, the container
is supposed to log it before it destroys the instance. The only other path
to destruction in the state diagram is when the bean goes from 'pooled' to
'does not exist', when the container calls the unsetEntityContext method.
So, maybe put some output code in the unsetEntityContext method to see if
the container is removing beans because of cache-size considerations. Just
a guess. Do you have <max-beans-in-cache> set to anything in particular?
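
Something along these lines in the bean class would tell you; it's just a
sketch, using plain System.out so it can't get lost in a logging config:

    // Sketch only: add alongside the entity bean's other lifecycle callbacks.
    private javax.ejb.EntityContext ctx;

    public void setEntityContext(javax.ejb.EntityContext ctx) {
        this.ctx = ctx;
    }

    public void unsetEntityContext() {
        // If instances are going from 'pooled' to 'does not exist' via this
        // path, you will see it here.
        System.out.println("unsetEntityContext on " + getClass().getName()
                           + " instance " + System.identityHashCode(this));
        this.ctx = null;
    }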

Bill
Jay Schmidgall
2005-01-06 15:40:13 UTC
Well, I put logging in the unsetEntityContext method and it is apparently
not being called; max-beans-in-cache is not explicitly set, so it defaults
to 1000, I believe, and the console tells me the current cached beans
count is under 50 for both.

On the other hand, the scenario I suggested does seem to coincide well
with the destroyed count; that is, after the bean is not found by the
finder, the destroyed count goes up. From your description that sounds
like it should just be coincidence, but the correlation is definitely
there.
bill kemp
2005-01-06 16:10:50 UTC
Well, there are only two paths to 'does not exist' in the spec:
unsetEntityContext or a system exception. If unsetEntityContext isn't
being called, then it must be a system exception. If that isn't being
logged, the container isn't obeying the spec. If a FinderException is
causing the destruction of the instance, that isn't spec compliant,
either. A reproducible test case and a support call may be your best
course of action.
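
A bare-bones client that just hammers the failing finder would probably be
enough; something like the sketch below, where the JNDI name, the
AccountHome interface, the key type, and the server URL are all
placeholders for your real setup:

    // Sketch of a standalone test client; names and URL are placeholders.
    import java.util.Hashtable;
    import javax.ejb.FinderException;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.rmi.PortableRemoteObject;

    public class FinderMissTest {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // your server here
            Context ic = new InitialContext(env);

            AccountHome home = (AccountHome) PortableRemoteObject.narrow(
                    ic.lookup("ejb/AccountHome"), AccountHome.class);

            for (int i = 0; i < 100; i++) {
                try {
                    home.findByPrimaryKey("no-such-key-" + i); // guaranteed miss
                } catch (FinderException expected) {
                    // the application exception the spec calls for; ignore it
                }
            }
            // Note the bean's destroyed count on the console before and after
            // this run; if it climbs by roughly 100, you have your test case.
        }
    }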

Bill