ScienceGuardians

Exploring the acceptance of e-learning in health professions education in Iran based on the technology acceptance model (TAM)

Authors: Haniye Mastour, Razieh Yousefi, Shabnam Niroumand
Journal: Scientific Reports
Publisher: Springer Science and Business Media LLC
Publication date: 2025-03-10
ISSN: 2045-2322 DOI: 10.1038/s41598-025-90742-5

Table 2 (demographic info): faculty age is listed as 45.11 ± 10.41 years. That's a huge SD, so within one standard deviation you have faculty from roughly the mid-30s to the mid-50s. But teaching experience is 13.23 ± 9.14 years.
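To put numbers on that spread, the ±1 SD bands implied by the Table 2 figures quoted above can be computed directly:

```python
# ±1 SD bands implied by the Table 2 means and SDs quoted above.
mean_age, sd_age = 45.11, 10.41
mean_exp, sd_exp = 13.23, 9.14

age_band = (mean_age - sd_age, mean_age + sd_age)  # ~34.7 to ~55.5 years old
exp_band = (mean_exp - sd_exp, mean_exp + sd_exp)  # ~4.1 to ~22.4 years teaching

print(f"age:        {age_band[0]:.1f} .. {age_band[1]:.1f}")   # prints age:        34.7 .. 55.5
print(f"experience: {exp_band[0]:.1f} .. {exp_band[1]:.1f}")   # prints experience: 4.1 .. 22.4
```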

Hypothesis table (Table 9): for undergraduates, PEU → PU has β = 0.111, p = 0.031, which is statistically significant but a very small effect. Yet the discussion claims PEU strongly influences PU. Isn't that overstating it? And for postgraduates, PU → ATU is not significant (p = 0.225); you later say PU didn't impact their attitudes, fine, but then why is PU → IU significant (p = 0.031)? Within TAM logic, that combination is contradictory.
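One rough way to see how small β = 0.111 is: for a single standardized predictor, the squared coefficient approximates the share of variance explained. This is a simplification (in the full structural model other paths also feed PU), but it makes the magnitude concrete:

```python
# Rough magnitude check: squared standardized coefficient as approximate
# variance explained (single-predictor simplification, not the full SEM).
beta = 0.111
variance_share = beta ** 2
print(f"{variance_share:.4f}")  # prints 0.0123, i.e. roughly 1.2% of variance
```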

Validity of "Individual Factors" for faculty: the AVE for IF in the faculty data is 0.464, below the 0.5 threshold conventionally required for convergent validity, yet the construct is still used in the model. Why wasn't this addressed, or the construct removed?
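For reference, AVE is just the mean of the squared standardized loadings, so it is easy to check. A minimal sketch; the loadings below are hypothetical (the paper's actual item loadings aren't quoted here), chosen only to land near the reported 0.464:

```python
def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings, NOT the paper's actual values.
if_loadings = [0.75, 0.70, 0.65, 0.61]
print(f"AVE = {ave(if_loadings):.3f}, acceptable: {ave(if_loadings) >= 0.5}")
# prints AVE = 0.462, acceptable: False
```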

Sampling bias: you used convenience sampling during COVID, which you admit may overrepresent motivated users. But then you generalize the findings to "health professions education in Iran." Isn't that a stretch, especially since the sample comes from a single university?

Organizational factors indirectly affecting acceptance: you found OF didn't directly affect PEU/PU, only through individual factors. But in Table 9, OF → IF is significant for all groups, while IF → PEU is significant only for faculty.
So for students, organizational factors don't actually flow through to ease of use at all. Yet the discussion claims OF indirectly affects e-learning use for everyone, which isn't fully supported by your statistics.
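The standard product-of-coefficients check makes this point: an indirect effect OF → IF → PEU only exists if both legs hold. A sketch with a Sobel-style z; the coefficients and standard errors below are hypothetical (neither is quoted above), set up so the second leg is weak, as for the student groups:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test z for the indirect effect a*b (product of two path coefficients)."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical values: OF -> IF significant, IF -> PEU weak (student pattern).
a, se_a = 0.45, 0.08   # OF -> IF
b, se_b = 0.06, 0.07   # IF -> PEU
z = sobel_z(a, se_a, b, se_b)
print(f"indirect effect = {a*b:.3f}, z = {z:.2f}")  # prints indirect effect = 0.027, z = 0.85
# |z| < 1.96, so the indirect path is not significant despite a strong first leg.
```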

Cronbach's alpha for Social Factors in the faculty sample is 0.589 (Table 3), below the conventional 0.7 cutoff.
Why is that considered acceptable? Low reliability for a core construct weakens the whole model.
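For context, Cronbach's alpha is straightforward to compute from item scores. A minimal sketch; the response data below is made up for illustration, not taken from the paper:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, same respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Made-up 5-point responses from six respondents on three hypothetical items.
items = [
    [4, 3, 5, 2, 4, 3],
    [3, 3, 4, 2, 5, 2],
    [5, 2, 4, 3, 3, 4],
]
print(f"alpha = {cronbach_alpha(items):.3f}")  # prints alpha = 0.632
```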
Page 12 contains a bizarre block of numbers that looks like a corrupted sentence or misplaced data, almost like a copy-paste error from some analysis output. What is that about?

You say innovation characteristics positively influence PU and PEU, but for faculty IC → PEU is 0.528 and IC → PU is 0.357, while for undergraduates the paths are much stronger (0.791 and 0.484). That's a large group difference that isn't discussed in any depth. Why would innovation features matter less for faculty?
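A formal way to back up that contrast would be a pairwise test of the group coefficients (a Clogg-style z for the difference between independent-group estimates). A sketch using the βs quoted above; the standard errors are hypothetical, since the paper's SEs aren't quoted here:

```python
import math

def coef_diff_z(b1, se1, b2, se2):
    """z statistic for the difference between coefficients from two independent groups."""
    return (b1 - b2) / math.sqrt(se1**2 + se2**2)

# Betas from Table 9 as quoted above; the standard errors are hypothetical.
z_peu = coef_diff_z(0.791, 0.07, 0.528, 0.09)  # IC -> PEU: undergrads vs faculty
z_pu  = coef_diff_z(0.484, 0.08, 0.357, 0.10)  # IC -> PU:  undergrads vs faculty
print(f"IC->PEU z = {z_peu:.2f}, IC->PU z = {z_pu:.2f}")
# prints IC->PEU z = 2.31, IC->PU z = 0.99  (with these assumed SEs)
```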
