Research Article

WHO KNOWS? ALIEN INTELLIGENCE, EPISTEMIC ALIENATION, AND THE TRANSFORMATION OF KNOWLEDGE

Year 2025, Issue 2, 254–286, 22.10.2025

Abstract

This study aims to evaluate the role of artificial intelligence (AI) systems in knowledge production not merely as a technical transformation, but as a fundamental epistemological rupture. Today, applications such as large language models and algorithmic decision-support systems do not simply imitate human cognition; they increasingly replace it, constructing an entirely new regime of knowledge. This shift alters not only the content of knowledge but also its modes of production, control mechanisms, and authority structures. The paper analyzes this transformation through the lens of “epistemic alienation,” a concept that problematizes the transition from human-centered knowledge production to post-human knowledge forms generated by opaque and unaccountable algorithmic processes. Marx’s theory of alienation is used not only as a historical analogy but as a conceptual framework to interrogate the human condition in the face of its own cognitive creations. In contrast to these critical views, the paper examines the optimistic narrative advanced by figures such as Marc Andreessen, which frames AI as a liberating and empowering technology. Harari’s notion of “alien intelligence” is employed to highlight the ontological and epistemic departure of AI systems from human-centered cognition. By bringing together perspectives from the philosophy of knowledge, the sociology of science, and critical technology studies, the article underscores the limitations of classical epistemological frameworks in addressing this novel phenomenon. Ultimately, the study argues that AI must be understood not merely as a tool or instrument, but as an emerging epistemological agent reshaping the very foundations of knowledge regimes. This reframing urges a deeper philosophical engagement with AI’s role in epistemic authority, meaning-making, and the evolving relationship between human and machine intelligences.

References

  • Amayreh, M., & Amayreh, A. (2025). Artificial intelligence in higher education: A classification of academic users and implications for epistemic practices. Journal of Educational Technology and Cognitive Learning, 12(1), 33–52. http://doi.org/10.1016/j.jetcl.2025.01.004
  • Andreessen, M. (2023, June 6). Why AI will save the world. Andreessen Horowitz. https://a16z.com/ai-will-save-the-world/
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., ... & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. http://doi.org/10.1038/s41586-018-0637-6
  • Baiburin, R., Meissner, N., & Talbot, L. (2024). Artificial intelligence as a cognitive collaborator: Rethinking research workflows in the age of language models. AI & Society, 39(2), 417–438. http://doi.org/10.1007/s00146-023-01624-4
  • Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260. http://doi.org/10.1086/292745
  • Carboni, C., Wehrens, R., van der Veen, R., & de Bont, A. (2023). Eye for an AI: More-than-seeing, fauxtomation, and the enactment of uncertain data in digital pathology. Social Studies of Science, 53(5), 712–737. http://doi.org/10.1177/03063127231167589
  • Christie’s. (2018). Is artificial intelligence set to become art’s next medium? https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx
  • Clark, A. (2015). Radical predictive processing. The Southern Journal of Philosophy, 53(S1), 3–27. http://doi.org/10.1111/sjp.12120
  • Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19. http://doi.org/10.1093/analys/58.1.7
  • Dyer-Witheford, N., Kjosen, A. M., & Steinhoff, J. (2022). Yapay zekâ ve kapitalizmin geleceği: İnsandışı bir güç (B. Cezar, Çev.). İletişim Yayınları.
  • Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? Oxford Martin School, University of Oxford. https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
  • Gao, C., Liu, Z., Li, F., & Wang, J. (2023). Academic co-authorship with large language models: Opportunities and ethical challenges. Journal of Scholarly Publishing, 54(3), 173–189. http://doi.org/10.3138/jsp-2023-0020
  • George, D., Lehrach, W., Kansky, K., Lázaro-Gredilla, M., Laan, C., Marthi, B., ... & Lavin, A. (2017). A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs. Science, 358(6368), eaag2612. http://doi.org/10.1126/science.aag2612
  • Goldberg, S. (2011). Relying on others: An essay in epistemology. Oxford University Press.
  • Goldberg, S. (2020). Epistemic dependence in contemporary epistemology. Synthese, 197(7), 2781–2803. http://doi.org/10.1007/s11229-018-01981-8
  • Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729–754. http://doi.org/10.1613/jair.1.11222
  • Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357–364. http://doi.org/10.1016/j.tics.2010.05.004
  • Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.
  • Hanson, N. R. (1958). Patterns of discovery: An inquiry into the conceptual foundations of science. Cambridge University Press.
  • Harari, Y. N. (2024). Nexus: Taş Devri’nden Yapay Zekâya Bilgi Ağlarının Kısa Tarihi (Ç. Şentuğ, Çev.). Kolektif Kitap.
  • Hardwig, J. (1991). The role of trust in knowledge. The Journal of Philosophy, 88(12), 693–708. http://doi.org/10.2307/2027007
  • He, X., & Burger-Helmchen, T. (2024). Evolving knowledge management: Artificial intelligence and the dynamics of social interactions. Journal of Knowledge Management, 28(3), 456–472. http://doi.org/10.1108/JKM-03-2024-0123
  • Humphreys, P. (2009). The philosophical novelty of computer simulation. Synthese, 169(3), 615–626. http://doi.org/10.1007/s11229-008-9435-2
  • Hutchins, E. (1995). Cognition in the wild. MIT Press.
  • Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2023). Artificial intelligence and knowledge management: A partnership between human and AI. Business Horizons, 66(1), 87–99. http://doi.org/10.1016/j.bushor.2022.03.002
  • Jones, K. (2012). Trustworthiness. Ethics, 123(1), 61–85. http://doi.org/10.1086/667837
  • Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Harvard University Press.
  • Korteling, J. E., Brouwer, A. M., Toet, A., & van Erp, J. B. F. (2021). Human–technology teaming: Using artificial intelligence to enhance human decision making. Human Factors, 63(1), 5–25. http://doi.org/10.1177/0018720819874666
  • Koskinen, I. (2023). We have no satisfactory social epistemology of AI-based science. Social Epistemology, 38(4), 458–475. http://doi.org/10.1080/02691728.2023.2286253
  • Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
  • Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. http://doi.org/10.1017/S0140525X16001837
  • Latour, B., & Woolgar, S. (1979). Laboratory life: The construction of scientific facts. Sage Publications.
  • Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11. http://doi.org/10.1109/THFE2.1960.4503259
  • Marx, K. (2013). 1844 el yazmaları: Ekonomi-politiğin eleştirisine katkı (M. Belge, Çev.; 8. bs.). Birikim Yayınları.
  • McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. http://doi.org/10.1609/aimag.v27i4.1904
  • Metz, C. (2023, March 15). OpenAI’s GPT-4 passes bar exam and solves logic puzzles. The New York Times. https://www.nytimes.com/2023/03/15/technology/openai-gpt4-chatbot.html
  • Miller, B., & Freiman, C. (2020). Trust in science. In J. Lackey (Ed.), Applied epistemology (pp. 111–133). Oxford University Press.
  • Morrison, C. (2018, August 20). Bank of England economist warns thousands of UK jobs at risk from robots and AI. The Independent. https://www.independent.co.uk/news/business/news/uk-job-loss-risk-ai-robots-artificial-intelligence-technology-bank-of-england-andy-haldane-a8498901.html
  • Nature Editorial. (2023). Tools not authors: AI in research publication. Nature, 613(7944), 612. http://doi.org/10.1038/d41586-023-00191-1
  • Ng, A. (2017, October). Artificial intelligence is the new electricity [Talk]. Stanford University. https://www.youtube.com/watch?v=21EiKfQYZXc
  • Nguyen, C. T. (2022). Trust as an unquestioning attitude. Oxford Studies in Epistemology, 7, 1–28.
  • Nickel, P. J. (2013). Trust and autonomous systems. In M. Decker et al. (Eds.), Ethics for robots (pp. 31–38). AKA Verlag.
  • OpenAI. (2020). GPT-3: Language models are few-shot learners. https://arxiv.org/abs/2005.14165
  • OpenAI. (2024). GPT-4o technical report. https://openai.com/research/gpt-4o
  • Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., ... & Ng, A. Y. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint, arXiv:1711.05225. https://arxiv.org/abs/1711.05225
  • Real, E., Aggarwal, A., Huang, Y., & Le, Q. V. (2020). AutoML-Zero: Evolving machine learning algorithms from scratch. Nature, 586(7839), 113–117. http://doi.org/10.1038/s41586-020-03062-2
  • Taylor, A. (2018, August). The automation charade. Logic Magazine, (5). https://logicmag.io/failure/the-automation-charade/
  • Taylor, K. A. (2022). Yapay zekânın geçmişi ve geleceği. In D. Acemoğlu, D. Johnson, & E. Pascual (Eds.), Yapay zekâyı yeniden tasarlamak: Otomasyon çağında iş, demokrasi ve adalet (H. Dölkeleş, Çev., pp. 117–137). Efil Yayınevi.
  • Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285. http://doi.org/10.1126/science.1192788
  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. http://doi.org/10.1093/mind/LIX.236.433
  • Wagenknecht, S. (2014). Opaque and translucent epistemic dependence in collaborative scientific practice. Episteme, 11(4), 475–492. http://doi.org/10.1017/epi.2014.25
  • Wagenknecht, S. (2015). A social epistemology of research groups. Palgrave Macmillan.
  • Wilholt, T. (2013). Epistemic trust in science. The British Journal for the Philosophy of Science, 64(2), 233–253. http://doi.org/10.1093/bjps/axs007

Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü


Abstract

This study aims to evaluate the role of artificial intelligence systems in knowledge production not merely as a technical transformation but also as an epistemological rupture. Today, AI applications such as large language models and algorithmic decision-support systems do not simply imitate human cognition; they take its place, constructing a new regime of knowledge. This process transforms not only the content of knowledge but also its mode of production, its control mechanisms, and its relations of authority. The study analyzes this transformation within the framework of the concept of “epistemic alienation,” interrogating the transition from human-centered knowledge production to post-human forms of knowledge generated by inexplicable and uncontrollable algorithmic structures. In this context, Marx’s theory of alienation is employed not merely as a historical analogy but as a conceptual ground for understanding the human being’s position in relation to its own product. In addition, the views of representatives of technological optimism such as Marc Andreessen are examined comparatively alongside critical epistemological approaches. Harari’s proposed concept of “alien intelligence” (yabancı zekâ) is used as an alternative designation emphasizing artificial intelligence’s departure from human cognition. While interrogating the effects of artificial intelligence on the nature, production, and legitimacy of knowledge, the study also draws attention to the inadequacies of classical theories of knowledge in the face of this new phenomenon. It concludes that artificial intelligence should be positioned not merely as a technical tool but as an epistemological subject reshaping regimes of knowledge.

Ethics Statement

-

Supporting Institution

-

Acknowledgements

-


Details

Primary Language: Turkish
Subjects: Philosophy of Knowledge, Philosophy of Computing, Philosophy of Technology
Section: Articles
Authors

Ömer Faik Anlı 0000-0002-5621-5145

Publication Date: 22 October 2025
Submission Date: 17 June 2025
Acceptance Date: 21 October 2025
Published Issue: Year 2025, Issue 2

How to Cite

APA Anlı, Ö. F. (2025). Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü. Kilikya Felsefe Dergisi(2), 254-286.
AMA Anlı ÖF. Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü. KFD. Ekim 2025;(2):254-286.
Chicago Anlı, Ömer Faik. “Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü”. Kilikya Felsefe Dergisi, sy. 2 (Ekim 2025): 254-86.
EndNote Anlı ÖF (01 Ekim 2025) Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü. Kilikya Felsefe Dergisi 2 254–286.
IEEE Ö. F. Anlı, “Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü”, KFD, sy. 2, ss. 254–286, Ekim 2025.
ISNAD Anlı, Ömer Faik. “Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü”. Kilikya Felsefe Dergisi 2 (Ekim 2025), 254-286.
JAMA Anlı ÖF. Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü. KFD. 2025;(2):254–286.
MLA Anlı, Ömer Faik. “Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü”. Kilikya Felsefe Dergisi, sy. 2, 2025, ss. 254-86.
Vancouver Anlı ÖF. Kim Biliyor?: Yabancı Zekâ, Epistemolojik Yabancılaşma ve Bilginin Dönüşümü. KFD. 2025(2):254-86.