May I clarify: do you see this only for a few seconds just after you’ve generated the token, after which the access token behaves completely normally? Or do you see longer-term or intermittent irregular behaviour with your tokens?
To be honest, I’m not sure whether we’re able to use the token after this occurs, because we always generate a new token when we need to access the API (instead of persisting and refreshing tokens), and we always cancel the Stuart operation at this point so that the user can initiate it again. I don’t think it’s happening very often (it has only occurred ~15 times in the last 2 weeks), but then again we are not making many requests at the moment.
This issue is likely the result of requesting a new token each time, which is not the recommended approach. When a token is requested and a job is requested immediately afterwards, our system may experience some lag in processing the new token before the job request comes in. Is it feasible for you to switch to the persist-and-refresh approach? Depending on the language you’re using, we have client libraries that can handle this token refresh for you.
If you’re experiencing this issue even when already following the best practice of caching the token and only requesting a new one when it has expired, then a mitigating step is indeed to wait a few seconds before calling the API with a renewed token.
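To illustrate, here is a minimal sketch of the cache-and-refresh approach with a short grace wait after renewal. The function names, the token lifetime, and the margin values are assumptions for illustration, not Stuart’s actual API or recommended constants:

```python
import time

TOKEN_TTL = 3600    # assumed token lifetime in seconds
EXPIRY_MARGIN = 60  # renew slightly early rather than risk an expired token
GRACE_WAIT = 2      # short pause so a fresh token can propagate server-side

def request_new_token():
    """Placeholder for the real OAuth token request (hypothetical)."""
    return {"access_token": "abc123", "obtained_at": time.time()}

_cached = None

def get_token():
    """Return a cached token, renewing only when it is near expiry."""
    global _cached
    now = time.time()
    if _cached is None or now - _cached["obtained_at"] > TOKEN_TTL - EXPIRY_MARGIN:
        _cached = request_new_token()
        time.sleep(GRACE_WAIT)  # give the auth system time to register the new token
    return _cached["access_token"]
```

With this pattern, the grace wait is only paid once per token lifetime rather than on every API call.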
Nevertheless, we will investigate this issue internally as it’s not expected behaviour.
Yes, I can confirm: the very first time we requested a token and then used it right away, we got an invalid-token error, which puzzled me for a bit. Depending on how you implement the refresh, this could create a loop of repeatedly requesting a new token. For now the cached token is working fine, but we might hit the same issue again when it expires. (A short wait after receiving the token response is a quick workaround for me, but come to think of it, it’s probably better to have this wait anyway, to stop a bad token racing the system.)
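For reference, one way to avoid that refresh loop is to bound the retries and back off between attempts. This is only a sketch; the `call_api` stub, the exception type, and the retry limit are hypothetical:

```python
import time

MAX_RETRIES = 3

def call_api(token):
    """Hypothetical API call; raises on an invalid token."""
    if token == "stale":
        raise PermissionError("invalid token")
    return {"status": "ok"}

def call_with_retry(get_token, invalidate):
    """Retry a bounded number of times, forcing a token refresh between attempts."""
    for attempt in range(MAX_RETRIES):
        try:
            return call_api(get_token())
        except PermissionError:
            invalidate()               # drop the cached token so the next call renews it
            time.sleep(2 ** attempt)   # back off so a fresh token can propagate
    raise RuntimeError("token still invalid after %d attempts" % MAX_RETRIES)
```

The bounded loop means a persistently bad token surfaces as an error after a few attempts instead of spinning forever.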