AI detectors and academic papers

Not sure if this is the right place to ask this, so I apologize in case it isn't. I'd appreciate it if someone could guide me toward a subreddit where I can ask this question. Anyway: I've recently finished a review paper on a molecule and its applications in certain areas; it's around 27k words and 93 pages long. I used ChatGPT to revise certain sentences when I didn't like my own phrasing, or had it suggest alternative ways to say something because I had repeated a certain word a bit too much in a paragraph. English isn't my first language, so using LLMs has become a bit second nature to me: they help me check my grammar, fix sentence flow, and sometimes suggest better transitions between paragraphs.

Over time, I noticed Grammarly flagging AI usage in my paper, so I used QuillBot to check the overall percentage, which it gave as 8%. Now I'm quite worried this could affect my submission process, especially since my supervisor is telling me it should be 0%.

I know the general consensus is that AI detectors are unreliable, but I think everybody is using them now in academic settings to weed out AI-written crap. Still, 8% is such a low number, and yet my supervisor is highly insistent that it should be 0. I would trust her on this subject if she weren't borderline technologically inept.

Anyway, sorry for the wall of text. I guess my question is: is 8% too much? Should I try to make it lower? And if so, how the hell do I do that? Especially since AI seems to love some phrases I tend to use a lot, like "remarkable result" or "the authors delved deeper into this issue".