Guarding the gate: using honeywords to enhance authentication security
A honeyword (decoy password) is a plausible variant of a user's real password that shares its characteristics, making it very challenging for an attacker to distinguish the real password from a honeyword, particularly when the honeyword contains personal information (PI). Using a honeyword generation technique (HGT), honeywords are generated in bulk, and their hashes are stored alongside the real password hash in an organization's database, with triggers that flag a breach before it is too late. According to previous research, an HGT may fail if the generated honeywords do not contain the user's personal information, making it easier for an attacker to mount a targeted attack. It is therefore good practice to include chunks containing PI, or parts of the user's original password, in the generated honeywords so that they look natural. To generate such chunk-based honeywords, we apply prompt engineering with Large Language Models (LLMs). In this report, we improve the existing prompt so that the LLM gains a deeper understanding of the task and produces better output. In addition, we compare the existing base GPT model with newer models such as GPT-3.5-turbo and GPT-4. Taking password strength as the base metric, we present results and statements indicating which model outperformed the others.
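The breach-detection mechanism the abstract describes can be illustrated with a minimal sketch. The sweetword list, the fast hash, and the login routine below are all hypothetical simplifications: a real deployment would use a salted slow hash (e.g. bcrypt) and keep the index of the real password on a separate hardened "honeychecker" server, and the honeywords themselves would come from the LLM-based generator rather than being hand-written.

```python
import hashlib

def hash_pw(pw: str) -> str:
    # Illustrative only: a production system would use a salted, slow hash.
    return hashlib.sha256(pw.encode()).hexdigest()

# Hypothetical sweetwords for one user: the real password plus honeywords
# that reuse chunks of it (hand-written here; the paper generates these
# via LLM prompt engineering).
sweetwords = ["Rahul@1998", "Rahul@1989", "1998Rahul!", "Rahul_98#x"]
real_index = 0  # stored separately, e.g. on a honeychecker server

stored_hashes = [hash_pw(w) for w in sweetwords]

def login(submitted: str) -> str:
    h = hash_pw(submitted)
    if h not in stored_hashes:
        return "reject"   # not a sweetword: ordinary failed login
    if stored_hashes.index(h) == real_index:
        return "accept"   # genuine password
    return "alarm"        # honeyword submitted: likely database breach
```

Because an attacker who steals the hashed sweetwords cannot tell which entry is real, any attempt to log in with a cracked honeyword raises the alarm rather than granting access.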