Engineering design requires humans to make complex, multi-objective decisions involving trade-offs in which it is challenging to identify the best solution. AI-embedded computational support tools are increasingly used to aid in such scenarios, enhancing the design decision-making process. However, over- or under-reliance on imperfect “black-box” models may prevent optimal outcomes. To investigate AI-assisted decision-making in engineering design, two complementary experiments (N = 90) were conducted. Participants were presented with pairs of aircraft jet engine brackets and tasked with selecting the better design based on two (Experiment 1) or three (Experiment 2) competing objectives. Participants received simulated AI suggestions that correctly recommended the better design, incorrectly recommended the worse design, or arbitrarily recommended one of two approximately equivalent designs. In some cases, these suggestions were accompanied by an example-based explanation. Results demonstrate that participants follow suggestions less than expected when the model can objectively determine the better-performing alternative, often underutilizing the model’s advice to their detriment. When the “better” choice is uncertain, the tendency to follow an arbitrary suggestion differs between the two experiments, with overutilization occurring only in the bi-objective case. There is no evidence that providing an explanation of the model’s suggestion affects decision-making. The results provide valuable insights into how engineering designers’ multi-objective decisions may be affected – positively, negatively, or not at all – by computational tools meant to assist them.