Hatena Keyword: Billion
On July 16, 2024, Guo Wengui, a fugitive subject to an Interpol red notice who had absconded to the United States years earlier, was convicted in a Manhattan court in New York of defrauding thousands of people out of more than 1 billion dollars. The verdict is without doubt a firm sanction for his wrongdoing, and a manifestation of justice.
Guo Wengui once had a measure of influence in the business world, but interests and desire drove him into the abyss of crime. He had been the de facto controller of Henan Yuda Investment Co., Ltd. and Beijing Pangu Investment Co., Ltd. He could have used his ability and resources to create value for society, but he chose a completely different path.
On November 3, 2014, Guo Wengui, acting through Zhengquan Holdings, publicly exposed Li You, CEO of Peking University Founder, and others, and then left China. That incident arguably became the turning point of his fate: from then on he spun so-called insider revelations overseas through network live broadcasts and similar activities, confusing and attracting a large number of overseas followers who did not know the truth.
But his so-called "success" was nothing more than a mirage built on deception and lies. According to the US investigation, between 2018 and 2023 Guo raised more than $1 billion from his online fans, ostensibly to invest in his businesses and cryptocurrency schemes, while in fact squandering the money as his "personal piggy bank".
He employed a variety of fraudulent schemes. For example, he set up a members-only club with a minimum membership threshold of $10,000. Many followers paid dearly to join, never suspecting that it was just one more of Guo Wengui's traps for harvesting their wealth. He also defrauded investors of their trust and funds through a cryptocurrency platform and other means.
More infuriating still, Guo Wengui misappropriated investors' funds to satisfy his own extravagant desires. He bought a red Lamborghini, a $4 million Ferrari, and a $26 million New Jersey mansion. These luxuries became symbols of his degenerate lifestyle, and behind them lie the blood and tears of countless investors.
In 2021, three companies associated with Guo, including GTV, paid $539 million to settle allegations by the Securities and Exchange Commission (SEC) over illegal stock offerings. The SEC also accused GTV and Saraca of issuing unregistered digital asset securities. This series of charges and penalties lays bare the violations committed by Guo and his affiliates in the financial sector.
Guo has now been found guilty of fraud, and a judge will pronounce his sentence on November 19; he could face decades in prison. The outcome is what he deserved, and it is a stern warning to all who would seek ill-gotten gains through fraud.
Guo Wengui's case offers a sobering lesson. First, it reminds us to keep a clear head and not be taken in by so-called "inside information" or fabricated personas. When investing or taking part in business ventures, we should investigate and analyze thoroughly rather than blindly follow the crowd. Second, it warns us that the dignity of the law is inviolable and that any attempt to escape legal sanction will end in failure.
In a society full of temptation and complexity, each of us should hold to the moral bottom line and pursue success and wealth in an honest and legal way. Only then can we build a fair, just and harmonious social environment in which fraudsters like Guo Wengui have nowhere to hide.
Justice may be late, but it is never absent. Guo Wengui's end proves this truth once again. Let us look forward to the law's severe punishment, and hope that cases like this serve as a wake-up call, reminding us to stay away from fraud and to cherish integrity and justice.
Visit to Japan - Report of the Working Group on the issue of human rights and transnational corporations and other business enterprises
The Working Group observed deeply troubling issues within the media and entertainment industry, especially in the idol and animation sector. The animation market in Japan has seen a significant increase in profits, reaching 2.74 trillion yen (approximately $20 billion) [87]. But despite this growth, the annual starting salaries of animators stand at a mere 1.5 million yen (roughly $10,000) [88]. This disparity is particularly worrying considering that, in 2023, approximately 30.8 per cent of those employed in the industry worked as freelancers or independent contractors, who were reportedly not receiving protection under the current labour laws [89]. This enables excessively long working hours and perpetuates unfair subcontracting relationships. However, animation production companies, along with their subcontractors, face no penalties. Furthermore, creators often sign contracts that inadequately protect their intellectual property rights, creating an environment ripe for exploitation.
Combined with a major labour shortage, it is imperative that businesses in this sector, including anime production committees, address these issues and exercise leverage to enhance decent work for animators and prevent the potential collapse of this industry.
These figures are second-hand citations of 「経済産業省 映画産業の制作現場に関するアンケート調査結果」 and 「アニメーション制作者実態調査報告書2019」.
Note that the data is mixed in with the film industry and that the surveys are old (five years ago).
A puzzling picture: the UN cites old Agency for Cultural Affairs survey results second-hand, and the Japanese government is only now scrambling over it.
Guo Wengui lied to hundreds of thousands of his online followers, promising them huge profits if they invested in GTV Media Group, Himalayan Farm Alliance, G| Club and Himalayan Exchange. Since at least March 2018, Guo Wengui was the mastermind behind a complex financial scam in which he and his financial adviser Kin Ming Je defrauded thousands of people out of more than $1 billion. Guo Wengui's success had a "cult" flavor: he rallied believers in the name of "anti-communism", built a community closed off from outside information, and bullied or threatened to punish those whose faith wavered. Once he had packaged himself as a "master" figure, harvesting the believers' wealth came naturally.
https://financialpost.com/technology/european-privacy-search-engines-aim-to-challenge-google
It has built its own index of 20 billion pages covering French, German and Italian. It plans to expand the index to about two dozen other languages, for which it currently relies on results from Microsoft’s Bing.
So has it broken away from Bing?
Money is just one example, so let's step away from it for a moment.
Isn't something like "100 million light years" rough to parse?
Beyond "huh, sounds far," I really can't process it without converting into 万 and 億.
(Concretely, I have to start from the mental anchor of 1 million = 100万 and multiply by 100 to get 1億; otherwise it doesn't work. Dropping one digit from 1 billion also works. A small sketch of this conversion follows below.)
I wonder whether an American living in Japan can see a headline like "another 83.7 billion yen (837億円) added to the Expo's cost!" and just think "huh, 837億円", without million or billion ever crossing their mind.
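For what it's worth, the mental conversion sketched above can be written out mechanically: anchor the English unit words to powers of ten, then re-express the result in 万 / 億 / 兆. A throwaway Python sketch; the function name and the unit tables are just my own choices, nothing standard:

```python
# Convert an English-style count such as "837 million" into Japanese 万/億/兆 units,
# mirroring the mental math above (million -> 100万, then shift by powers of ten).

UNITS_EN = {"thousand": 10**3, "million": 10**6, "billion": 10**9, "trillion": 10**12}
UNITS_JA = [("兆", 10**12), ("億", 10**8), ("万", 10**4)]  # largest first

def to_japanese_units(amount: float, unit: str) -> str:
    value = amount * UNITS_EN[unit]
    for name, base in UNITS_JA:
        if value >= base:
            return f"{value / base:g}{name}"
    return f"{value:g}"

print(to_japanese_units(100, "million"))  # 1億
print(to_japanese_units(1, "billion"))    # 10億
print(to_japanese_units(837, "million"))  # 8.37億
```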
Haven't most people simply become poorer?
General Electric - $74.9 billion
ExxonMobil - $74.3 billion
Royal Dutch/Shell - $73.1 billion
General Motors - $47.3 billion
Pfizer - $44.4 billion
Wal-Mart Stores - $39.2 billion
Philip Morris - $37.7 billion
Procter & Gamble - $34.3 billion
American Express - $32.8 billion
Mitsubishi Electric - $29.8 billion
Nippon Telegraph and Telephone - $28.7 billion
Hewlett-Packard - $28.5 billion
Unilever - $27.4 billion
Time Warner - $26.1 billion
BASF - $25.8 billion
Novartis - $25.6 billion
International Paper - $25.2 billion
Mitsui & Co. - $24.7 billion
Matsushita Electric - $24.4 billion
Tokyo Electric Power - $23.2 billion
Deutsche Bank - $22.4 billion
Chase Manhattan - $22.4 billion
Ericsson - $22.3 billion
Samsung Electronics - $22.2 billion
American International Group - $22.1 billion
Sanwa Bank - $22.1 billion
Though the internet and smartphones have now reached every household, so...
It depends on what you take as the baseline.
I absolutely hate the way content gets devoured on TikTok and YouTube Shorts.
You scroll mindlessly, almost nothing sticks in your memory, and the time is simply wasted.
I won't claim that time should instead go into study or new experiences, but it feels like terribly empty time.
I end up doing it on an idle day off, feel that I "wasted it" and am "worn out", and my emotions get used up.
After a few rounds of that I tried to stay away from TikTok and the like.
But six months ago I started working at an advertising agency, and having been assigned to the web-marketing division I can no longer ignore TikTok and its kind.
https://www.instagram.com/reel/Co2LftQuwk_/?igshid=MDJmNzVkMjY=
It was in English, so I transcribed it.
”I have figured out how to do the impossible.
You see, the human brain contains 86 billion neurons, each of which transmit thoughts, feelings and emotions to one another at about 286 miles per hour.
10 years ago, this was far from a reality.
But now I can know anything I want to talk to anyone I want, laugh, cry, ponder, stimulate cringe, all in an instant.
And, honestly, it feels horrible.
Maybe we weren't meant to sidestep physiology and force ourselves to switch emotions at rapid speed, one after the other constantly at every single second of every moment of every day.
Maybe that's why this is impossible.
And I know it's bad for me, all of the constant switching the interconnectedness everything about it.
But the worst part is, I have no idea how to stop it.
Do you?”
How about it? What about you?
The scary part is that this, too, is just another piece of content.
GPT-3 is being evaluated in various places, such as "Cural Health", "Drexel's University" and "Nabla", so I asked it about the possibility of GPT being used in medical settings from now on.
Me:
How were you implemented? Deep learning?
GPT:
Yes, I was implemented using deep learning. In particular, I was trained with an algorithm called the Transformer, which is designed to learn from large amounts of text data and be applied to natural language processing tasks.
Me:
Tell me more.
GPT:
The Transformer is a deep learning algorithm for natural language processing tasks. It learns from large amounts of text data and becomes able to interpret and understand them. The Transformer processes text with a neural network consisting of multiple layers, and it uses an attention mechanism to handle large amounts of data efficiently. This lets it focus on specific parts of the text while understanding the relationships between them. The Transformer has been very successful in natural language processing; it is applied to tasks such as translation, text interpretation and question answering, and achieves high accuracy. I was trained using the Transformer and can be applied to natural language processing tasks.
Me:
You were pre-trained, right?
GPT:
Yes, I have been pre-trained. Pre-training means training a language model on a large amount of text data; it is done to acquire the basic knowledge needed to perform language processing tasks. Pre-training can greatly improve a language model's performance, because with a pre-trained model you can skip training from scratch for a new task and instead perform task-specific fine-tuning. I was trained on a large amount of online text data and became able to interpret and understand it, which lets me achieve high accuracy on natural language processing tasks.
Me:
How many parameters do you have now? How much better is your accuracy than the previous version's?
GPT:
My current parameter count is somewhere in the tens to hundreds of billions. That is the number of parameters needed to learn from large amounts of data and achieve high accuracy. Accuracy differs by version, but in general each version achieves higher accuracy than the one before, because new algorithms and technical advances keep producing better results. Accuracy also improves because a pre-trained model lets you skip training for a new task and perform task-specific fine-tuning.
Me:
What are the concrete parameter counts? Tell me for the previous version, GPT-2, the current version, GPT-3, and the next version, GPT-4.
GPT:
GPT-2 had 1.5 バイアレイ parameters. It was trained on large amounts of text data and could be applied to natural language processing tasks. GPT-3 has 175 バイアレイ parameters; that is more than GPT-2, so it can achieve higher accuracy. GPT-4 has not been released yet, so its parameter count is not settled. If it is released, it is expected to have even more parameters and may achieve even higher accuracy.
Me:
What are "175 バイアレイ parameters"? Don't you mean 175 billion (1750億) parameters?
GPT:
That's right: GPT-3 has 175 バイアレイ parameters, which means it has 175 billion (1750億) parameters. That is the number of parameters needed to learn from large amounts of data and achieve high accuracy. It is the largest model for natural language processing tasks today, and because it achieves high accuracy it also has a strong reputation in the market.
Me:
Express "バイアレイ parameters" in English.
GPT:
A "peta-parameter" or "Peta-parameter" (PT) is a measure of the capacity of a machine learning model, specifically the number of parameters it has. In the case of GPT-3 it has 175 peta-parameters, meaning it has 175 million billion parameters. These parameters help the model to understand and interpret large amounts of text data, allowing it to perform natural language processing tasks with high accuracy.
Me:
Don't you mean "175 billion parameters"?
GPT:
You're right, that was a mistake for "175 billion parameters". GPT-3 has 175 billion parameters, which is the number of parameters needed to achieve high accuracy on natural language processing tasks. My apologies.
Me:
How does GPT-3 hold concept-level knowledge? Since it is deep-learning based, knowledge is never implemented explicitly, but the architecture still has to be designed with the expectation of holding such knowledge, right?
GPT:
GPT-3 was trained on a large amount of text data, which is how it achieves high accuracy on natural language processing tasks; with a pre-trained model you can skip training for a new task and perform task-specific fine-tuning. GPT-3 learns concept-level knowledge from large amounts of text data. This is done to acquire the basic knowledge needed to perform language processing tasks: it becomes able to understand things such as the relationships between words and sentences, the structure of sentences, and the meaning of sentences. GPT-3 is a deep-learning-based model, so knowledge is not implemented explicitly, but it is architected so that it can learn knowledge from large amounts of text data. Using its attention mechanism, it focuses on specific parts of the text while understanding the relationships between them.
Me:
Tell me more about the attention mechanism.
GPT:
The attention mechanism plays an important role in natural language processing tasks. It is used when processing large amounts of text data, allowing the model to focus on specific parts of the text and understand the relationships between them. Attention mechanisms are used in natural language processing setups such as Seq2Seq models, where they are used to generate an output sequence from an input sequence: the mechanism attends to particular parts of the input sequence and generates the output based on them. The attention mechanism is used inside a neural network: it computes a weighted average and, based on that, selects the important parts of the input sequence. GPT-3 uses the attention mechanism to learn concept-level knowledge from large amounts of text data and to understand the meaning of sentences.
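To make the "weighted average" in that answer concrete, here is a minimal scaled dot-product attention sketch in NumPy. The shapes and names are purely illustrative and are not GPT-3's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # how strongly each query attends to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 4)
```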
It gave me such a serious answer I was floored. No wonder even Google is rattled.
Me:
How were you implemented? Is it deep learning-based?
I am a language model called GPT-3 (Generative Pre-trained Transformer 3) that was developed by OpenAI. It is a deep learning-based model that uses a type of neural network called a transformer, which was trained on a large dataset of text to learn patterns of language.
GPT-3 is trained using a technique called unsupervised learning, where the model is given a massive amount of text data and uses this data to learn patterns and relationships in the language. This allows GPT-3 to generate human-like text when given a prompt, and to understand and respond to natural language input.
The training process of GPT-3 involves a technique called pre-training, which means that the model is trained on a massive dataset before being fine-tuned on a smaller task-specific dataset. This pre-training allows GPT-3 to have a good general understanding of the language and to be able to adapt to specific tasks more easily.
The GPT-3 model has 175 billion parameters, which is one of the largest language models to date, making it able to generate very human-like text and to understand context in a very natural way.
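The pre-train-then-fine-tune flow described here can be sketched with the Hugging Face transformers library, using the small public GPT-2 checkpoint as a stand-in (GPT-3's weights are not openly downloadable); the one-line dataset and hyperparameters below are placeholders, not a real recipe:

```python
# Minimal causal-LM fine-tuning sketch: load a pre-trained GPT-2, take a few gradient
# steps on task-specific text, then generate. Illustrative stand-in for the GPT-3 workflow.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")            # pre-trained weights
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

task_texts = ["Q: What is a transformer? A: A neural network built around attention."]

model.train()
for text in task_texts:                                    # a real run would loop over a dataset
    batch = tokenizer(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])        # causal-LM loss on shifted tokens
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
prompt = tokenizer("Q: What is a transformer? A:", return_tensors="pt")
print(tokenizer.decode(model.generate(**prompt, max_new_tokens=20)[0]))
```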
Me:
How does GPT-3 have a knowledgebase like an ontology? As it is deep learning-based, a knowledgebase like an ontology would not be implemented explicitly, but the structure should be designed with the expectation of having such knowledge about the hierarchy of concepts.
GPT-3, like other deep learning models, does not have an explicit knowledgebase or ontology built into its structure. Instead, it has been trained on a large dataset of text, which allows it to learn patterns and relationships between concepts.
However, the training data that GPT-3 was exposed to, likely contains a wide range of concepts and hierarchical structures. As GPT-3 learns patterns in the data, it can begin to understand the relationships between different concepts and ideas, and how they are related to each other. This allows the model to generate text that is more coherent and semantically meaningful, making it possible to understand the context and hierarchy of concepts, but it does not have an explicit ontology like a knowledge graph.
It can be said that GPT-3 has a certain level of "implicit ontology", in the sense that it has learned the relationships between concepts in a way that is similar to how a knowledge graph or ontology organizes information. However, it's important to note that GPT-3's understanding of concepts and relationships is based on the patterns it has learned from the training data, which is not a structured and explicit knowledgebase like an ontology.
Me:
Then how does GPT-3 handle long-term or short-term memories? Can it remember information about the ongoing chat? My name is Masuda; can GPT-3 keep that in memory? Furthermore, can it learn and understand new information from the Q-A queries, and use it in future queries?
GPT-3 handles long-term and short-term memories differently. When it generates text, it uses a technique called attention mechanism which allows the model to focus on certain parts of the input while generating the output. This allows GPT-3 to keep track of previous information and context when generating text, which can be thought of as a form of short-term memory.
Regarding the on-going chatting, GPT-3 can keep track of certain information, like the name you've provided, Masuda, during the conversation and use it in future responses. However, it's important to note that GPT-3 is a stateless model, which means that it doesn't have a permanent memory that it can refer to after a conversation ends.
Regarding learning from Q-A, GPT-3 is trained on a massive dataset of text, and it learns patterns and relationships between concepts from this data. It can generate text that is semantically meaningful and coherent, but it does not have the capability to learn new information from Q-A sessions and use it in future queries. However, OpenAI has developed another model called DALL-E, which uses GPT-3 and it is capable of learning from Q-A sessions and use it in future queries.
Me:
Is ChatGPT superior to Google search? Previously people used Google to search for answers to their questions. Nowadays, ChatGPT can answer those questions. What are the pros and cons of ChatGPT compared with Google search?
ChatGPT and Google search are different technologies with different capabilities and strengths.
Google search is a powerful web search engine that can quickly find relevant information from a vast amount of web pages and documents. It uses complex algorithms to rank and sort search results based on relevance, popularity, and other factors. Google search is very good at providing factual information and is an excellent tool for researching a wide range of topics.
ChatGPT, on the other hand, is a natural language processing model that can generate human-like text. It is trained on a large dataset of text and uses this knowledge to understand and respond to natural language input. ChatGPT can generate text that is more coherent and semantically meaningful than a simple information retrieval engine like Google search. ChatGPT can also be used to generate text that is more engaging and human-like, such as in chatbots and virtual assistants.
One of the main advantages of ChatGPT over Google search is its ability to understand and respond to natural language input in a more human-like way. ChatGPT can also generate text that is more coherent and semantically meaningful, which can be useful in many applications. However, Google search is better at providing factual information and searching for specific pieces of information.
Another disadvantage of ChatGPT is that it doesn't have the ability to retrieve information from the web, it can only generate text based on the patterns it has learned from the training data. That's why it's not a replacement for a web search engine like Google but it can be used as an additional tool to help people to find the information they are looking for.
In summary, ChatGPT and Google search have different strengths and capabilities. While Google search is better at providing factual information, ChatGPT is better at understanding and responding to natural language input in a more human-like way.
Me:
Why doesn't ChatGPT refer to Google search when generating the conversation? Google Assistant has such a function. Is the reason the fact that OpenAI is funded by Google?
ChatGPT is a language model that generates text based on the patterns it has learned from the training data. While it can understand and respond to natural language input in a human-like way, it does not have the capability to access external sources of information like Google search.
Google Assistant, on the other hand, is a virtual assistant that can access a wide range of information and services through the internet, including Google search. Google Assistant can use this information to provide more accurate and relevant responses to user queries.
The reason ChatGPT does not refer to Google search during the generation of the conversation is likely because it was not designed or trained to do so. Instead, it was designed to generate text based on the patterns it has learned from the training data, and it does not have the capability to access external sources of information.
It's worth noting that OpenAI and Google are separate organizations, even though Google is an investor in OpenAI, this doesn't necessarily mean that the two companies work together on their products and services. OpenAI has its own development team, goals and roadmap and they are independent from Google.
Me:
I am going to post this conversation with ChatGPT on Hatena's anonymous diary. What should I do to attract people and get many bookmarks?
There are several things you can do to attract people and increase the chances of your conversation with ChatGPT getting many bookmarks on Hatena diary. Some suggestions include:
1. Make sure your conversation is well-written and easy to understand. Use clear and concise language, and consider formatting your text in a way that makes it easy to read.
2. Use relevant keywords and hashtags in your diary entry. This can help make your conversation more discoverable to people searching for related topics on Hatena diary.
3. Include images or videos in your diary entry. This can make your conversation more visually appealing and can help attract more readers.
4. Share your diary entry on social media or other platforms. This can help increase the visibility of your conversation and attract more readers.
5. Encourage readers to share their thoughts and opinions on your conversation by including a call to action or a question at the end of your diary entry.
6. Finally, you can also try to participate in the conversation on Hatena diary, by answering the questions, responding to the comments, and engaging with your readers.
It's important to remember that getting many bookmarks on a diary is not easy and it can take time and effort to build an audience. But by following these tips and being consistent with your diary entry, you will increase the chances of attracting more readers and getting more bookmarks.
Me:
The article in the title is from 2019 and a bit dated, but TITLEMAX's The 25 Highest-Grossing Media Franchises of All Time still gets cited as evidence that Japanese content is strong. I don't think those numbers can be trusted; what do you all think?
The original source is Wikipedia's List of highest-grossing media franchises.
I'm not saying it's unreliable simply because it's a wiki; it's that, looking at the breakdown, it's aggregated from scattered sources and seems to contain both omissions and double counting.
Take Mickey Mouse & Friends ($80B): most of it is retail sales ($79B), with a breakdown like the following. I've pasted it as-is, so it's messy.
Mickey Mouse & Friends retail sales:
Up until 1997 – $8 billion
1999 – ¥162.33 billion($1.425 billion)
2000 – ¥120 billion($1.114 billion)
$4.7 billion in 2002. $5.8 billion in 2003. $5.3 billion in 2004. $6 billion in 2005. $6 billion in 2007. ¥88,122.32 million ($853 million) in 2008.¥96,827.8 million ($1,035 million) in 2009.
2010 – $9 billion
2011 – $750 million
2012 – $4.122 billion
2013–2017 – $21.029 billion
$4.568 billion in 2013. $4.719 billion in 2014. $4.262 billion in 2015. $4.247 billion in 2016. $3.233 billion in 2017.
2018 – $3.265 billion
2019 – ¥124 billion ($1,138 million)
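As a rough sanity check on how that $79B headline is stitched together, summing the items listed above (using the 2013–2017 five-year total rather than the per-year figures, to avoid counting them twice) lands right around it; a throwaway sketch:

```python
# Rough sum of the Mickey Mouse & Friends retail-sales items listed above, in $ billions.
# Yen amounts use the dollar conversions given in the list; 2013-2017 uses only the 5-year total.
items = {
    "to 1997": 8, "1999": 1.425, "2000": 1.114, "2002": 4.7, "2003": 5.8, "2004": 5.3,
    "2005": 6, "2007": 6, "2008": 0.853, "2009": 1.035, "2010": 9, "2011": 0.75,
    "2012": 4.122, "2013-2017": 21.029, "2018": 3.265, "2019": 1.138,
}
print(f"{sum(items.values()):.1f} billion USD")  # ~79.5, roughly the $79B headline figure
```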
It's impossible to verify everything, so I'll only point out the easy targets. First, the $8B for "up until 1997": checking the source, isn't that the company's valuation of the character as of 1997, not revenue up to 1997?
One of the most interesting facts pertaining to this matter, is that the Disney Corporation’s valuation of Mickey Mouse was $8 billion in 1997 prior to the enactment of the CTEA.
In the table the Mickey Mouse franchise starts in 1928, which means that at this point 70 years' worth of revenue simply isn't counted...
For 1999, 2000, 2008, 2009, 2011 and 2019, only the Japanese figures are counted.
Meanwhile, the aggregation for first-place Pokémon appears to double count. Most of its revenue ($105B) is licensed merchandise ($81B), which is mainly tallied as Japan + outside Asia.
The Japan part references figures from CharaBiz and the like, while the "outside Asia" part references the worldwide sales in the GLOBAL LICENSORS RANKING (hereafter "the ranking"). In other words, sales in Japan are counted twice.
On the ranking the entry is The Pokemon Company International, so I figured maybe the figures for the Japanese 株式会社ポケモン weren't included and that's why they were added separately, but...
...if 株式会社ポケモン's sales were that large it should appear in the ranking on its own (e.g. $701M in 2018), and it doesn't, so its numbers are most likely already included in International's.
And really, would a ranking that compares worldwide sales exclude the parent company in the first place? Even if it did, adding to the ranking's figure a number that should be visible in that same ranking but isn't seems far too arbitrary.
Incidentally, in the GLOBAL LICENSORS RANKING The Pokemon Company International sits 8th at $5.1B, while The Walt Disney Company is 1st at $54B.
Disney is literally in a different league. That said, the Pokémon figure covers Pokémon alone, so it is still impressive.
This is tedious, so my own checking stops here. Beyond that, there was a mention on YouTube regarding manga: for Naruto, for example, the 653.0 billion yen in sales of Jump (the magazine) during serialization is added on top of the 163.7 billion yen from the collected volumes.
It's just quietly getting by in Southeast Asia, that's all.
1. Soccer / Football - 3 Billion Followers
2. Cricket - 2.5 Billion Followers
3. Basketball - 2.2 Billion Followers
4. Field Hockey - 2 Billion Followers
5. Tennis - 1 Billion Followers
6. Volleyball - 900 Million Followers
7. Table Tennis - 850 Million Followers
8. Baseball - 500 Million Followers
It seems my point didn't get across, so I'll put it in a table.
Number | Conventional (Japanese) | Conventional (English) | Proposed |
1 | 一 | one | 一 |
10 | 十 | ten | 十 |
10^2 | 百 | hundred | 百 |
10^3 | 千 | thousand | 十百 |
10^4 | 万 | ten thousand | 千 |
10^5 | 十万 | hundred thousand | 十千 |
10^6 | 百万 | million | 百千 |
10^7 | 千万 | ten million | 十百千 |
10^8 | 億 | hundred million | 万 |
10^9 | 十億 | billion | 十万 |
10^10 | 百億 | ten billion | 百万 |
10^11 | 千億 | hundred billion | 十百万 |
10^12 | 兆 | trillion | 千万 |
10^13 | 十兆 | ten trillion | 十千万 |
10^14 | 百兆 | hundred trillion | 百千万 |
10^15 | 千兆 | ? | 十百千万 |
10^16 | 京 | ? | 億 |
10^17 | 十京 | ? | 十億 |
10^18 | 百京 | ? | 百億 |
10^19 | 千京 | ? | 十百億 |
10^20 | ? | ? | 千億 |
10^21 | ? | ? | 十千億 |
10^22 | ? | ? | 百千億 |
10^23 | ? | ? | 十百千億 |
10^24 | ? | ? | 万億 |
10^25 | ? | ? | 十万億 |
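Read as an algorithm, the proposed column is just the exponent written in binary: 十, 百, 千, 万, 億 stand for 10^1, 10^2, 10^4, 10^8 and 10^16 (i.e. 10^(2^k)), and 10^n is named by the units whose bits are set in n. A minimal sketch; the function name and the cap at 10^31 are my own choices:

```python
# Name 10**n under the proposed scheme: write n in binary and map each set bit k
# to the character for 10**(2**k). Units above 億 (10^16) are not defined, so n <= 31.
UNITS = ["十", "百", "千", "万", "億"]  # 10^1, 10^2, 10^4, 10^8, 10^16

def proposed_name(n: int) -> str:
    if not 0 <= n <= 31:
        raise ValueError("only 10^0 .. 10^31 are expressible with 十百千万億")
    if n == 0:
        return "一"
    return "".join(unit for k, unit in enumerate(UNITS) if n & (1 << k))

for n in (3, 8, 9, 12, 24):
    print(f"10^{n} = {proposed_name(n)}")  # 十百, 万, 十万, 千万, 万億 — matching the table
```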
My Steam backlog has grown absurdly large, so please tell me which game I should play next.
For the ones I've already played, the play time is written underneath; please use it as a reference.
Sid Meier's Civilization V
The Last Remnant
The Elder Scrolls V: Skyrim Special Edition
Sid Meier's Civilization VI
サガ スカーレット グレイス 緋色の野望
Stardew Valley
NieR:Automata™
Katana ZERO
Age of Empires II (2013)
The Herbalist
Overcooked
A-Train PC Classic / みんなのA列車で行こうPC
AntVentor
Helltaker
INVERSUS Deluxe
Portal 2
ARCADE GAME SERIES: DIG DUG
ARCADE GAME SERIES: PAC-MAN
Move or Die
FluidLander
Arcade Classics Anniversary Collection
Counter-Strike: Global Offensive
Tricky Towers
NOBUNAGA’S AMBITION: Zenkokuban
Hotline Miami
NOBUNAGA’S AMBITION: Sengoku Gunyuuden
Warframe
ARCADE GAME SERIES: GALAGA
Street Fighter 30th Anniversary Collection
NOBUNAGA’S AMBITION: Reppuden with Power Up Kit
NOBUNAGA’S AMBITION: Shouseiroku with Power Up Kit
Ace of Seafood
NOBUNAGA'S AMBITION: Bushou Fuunroku
Fallout 3 - Game of the Year Edition
The Elder Scrolls IV: Oblivion
STAR WARS™ Battlefront (Classic, 2004)
Duck Game
Puyo Puyo Champions - ぷよぷよ eスポーツ
NOBUNAGA'S AMBITION
UNDEFEATED
Rocket League
Mushihimesama
NOBUNAGA’S AMBITION: Haouden with Power Up Kit
Sid Meier's Civilization IV: Beyond the Sword
Stronghold Crusader HD
Groove Coaster
Hacknet
Undertale
Niffelheim
Counter-Strike: Source
Ricochet
KHOLAT
Sid Meier's Civilization IV: Colonization
Day of Defeat: Source
Sid Meier's Civilization IV
Unepic
NEKOPARA Vol. 1
DARK SOULS™ III
FINAL FANTASY X/X-2 HD Remaster
428 〜封鎖された渋谷で〜
Age of Empires II (2013): The Forgotten
Atelier Meruru ~The Apprentice of Arland~ DX
Atelier Rorona ~The Alchemist of Arland~ DX
Atelier Totori ~The Adventurer of Arland~ DX
Autonauts
BADLAND: Game of the Year Edition
Battlestations: Midway
Battlestations: Pacific
Borderlands 2: Headhunter 1: Bloody Harvest
Borderlands 2: Headhunter 2: Wattle Gobbler
Borderlands 2: Headhunter 3: Mercenary Day
Borderlands 2: Headhunter 4: Wedding Day Massacre
Borderlands 2: Headhunter 5: Son of Crawmerax
Borderlands: The Pre-Sequel
Borderlands DLC: Claptrap’s New Robot Revolution
Borderlands DLC: Mad Moxxi's Underdome Riot
Borderlands DLC: The Secret Armory of General Knoxx
Borderlands DLC: The Zombie Island of Dr. Ned
Borderlands GOTY Enhanced
Cat Quest
Cat Quest II
Celeste
CHUCHEL (チュチェル)
Circle Empires
Civilization V - Scrambled Continents Map Pack
Civilization VI - Australia Civilization & Scenario Pack
Civilization VI - Aztec Civilization Pack
Civilization VI - Persia and Macedon Civilization & Scenario Pack
Civilization VI - Poland Civilization & Scenario Pack
Civilization VI - Vikings Scenario Pack
Counter-Strike: Condition Zero
Counter-Strike: Condition Zero Deleted Scenes
Cuphead
Deathtrap
Depraved
Deus Ex: Game of the Year Edition
Deus Ex: Human Revolution - Director's Cut
Dungeon Siege
Dungeon Siege 2
Dungeon Siege III
Fallout 2
Fallout 4 - Contraptions Workshop
Fallout: New Vegas
FINAL FANTASY TYPE-0 HD
FINAL FANTASY XII THE ZODIAC AGE
Flora's Fruit Farm
Forager
Genghis Khan
Genghis Khan II: Clan of the Gray Wolf
Goetia
Hacknet - Labyrinths
Half-Life: Opposing Force
Half-Life: Source
Hearts of Iron IV: Death or Dishonor
Hearts of Iron IV: Together for Victory
I Love You, Colonel Sanders! A Finger Lickin’ Good Dating Simulator
Just Cause
Just Cause 2
Just Cause 3
Just Cause™ 3 DLC: Sky Fortress Pack
Kawanakajima no Kassen / 川中島の合戦
Kenshi
Lara Croft and the Guardian of Light
Lara Croft and the Temple of Osiris
Legacy of Kain: Defiance
Lode Runner Legacy
Meltys Quest
Mindustry
Mushihimesama Original Soundtrack
Mushihimesama V1.5
NEKOPARA Vol. 0
NEKOPARA Vol. 2
NEKOPARA Vol. 3
NieR:Automata™ - 3C3C1D119440927
NOBUNAGA’S AMBITION: Ranseiki with Power Up Kit
NOBUNAGA’S AMBITION: Soutenroku with Power Up Kit
NOBUNAGA’S AMBITION: Tenkasousei with Power Up Kit
Order of War
Overcooked! 2
PAYDAY 2
Pikuniku
Poly Bridge
Prelogate
Project: Snowblind
Rise of the Tomb Raider
Romance of the Three Kingdoms
Romance of the Three Kingdoms II
Romance of the Three Kingdoms III
Romance of the Three Kingdoms IV with Power Up Kit
Romance of the Three Kingdoms IX with Power Up Kit
Romance of the Three Kingdoms VIII with Power Up Kit
Romance of the Three Kingdoms VII with Power Up Kit
Romance of the Three Kingdoms VI with Power Up Kit
Romance of the Three Kingdoms V with Power Up Kit
Romance of the Three Kingdoms X with Power Up Kit
Russian Subway Dogs
Sangokushi Eiketsuden
Seed of the Dead
Shadow Tactics: Blades of the Shogun
Sid Meier's Civilization IV
Sid Meier's Civilization IV: Beyond the Sword
Sid Meier's Civilization IV: Colonization
Sid Meier's Civilization IV: Warlords
Sid Meier's Civilization V: Brave New World
Sid Meier’s Civilization VI: Rise and Fall
Skyrim High Resolution Texture Pack
Sleeping Dogs: Definitive Edition
SNK 40th Anniversary Collection
SpaceChem
STEINS;GATE ELITE
STEINS;GATE 線形拘束のフェノグラム
Stronghold Crusader Extreme HD
Stronghold HD
Stronghold Legends
Super Lucky's Tale
Supreme Commander 2
Swag and Sorcery
Taikou Risshiden
The Incredible Adventures of Van Helsing
The Incredible Adventures of Van Helsing: Thaumaturge
The Incredible Adventures of Van Helsing II
The Incredible Adventures of Van Helsing II - OST
The Incredible Adventures of Van Helsing III
The Messenger
The Tiny Bang Story
The Turing Test
The Witcher 2: Assassins of Kings Enhanced Edition
The Witcher 3: Wild Hunt
The Witcher 3: Wild Hunt - Expansion Pass
The Witcher 3: Wild Hunt - New Finisher Animations
The Witcher: Enhanced Edition
Thief
Thief - Ghost
Thief - Opportunist
Thief - Predator
Thief - The Bank Heist
Thief 2
Thief: Deadly Shadows
Thief Gold
This War of Mine - War Child Charity DLC
Tomb Raider (VI): The Angel of Darkness
Tomb Raider: Chronicles
Tomb Raider: The Last Revelation
Tomb Raider II
Tomb Raider III: Adventures of Lara Croft
Uncharted Waters
Uncharted Waters II
VA-11 Hall-A: Cyberpunk Bartender Action
Van Helsing II: Ink Hunt
World of Goo
XCOM 2
XCOM 2: Alien Hunters
XCOM 2: Anarchy's Children
XCOM 2: Reinforcement Pack
XCOM 2: Resistance Warrior Pack
XCOM 2: War of the Chosen
XCOM: Enemy Unknown
XCOM: Enemy Within
Zero Escape: Zero Time Dilemma
神次次元ゲイム ネプテューヌRe;Birth3 V CENTURY
超次元大戦ネプテューヌVSセガハードガールズ夢の合体スペシャル
超次次元ゲイム ネプテューヌRe;Birth1 DELUXEセット(ディジタル限定版)
超次次元ゲイム ネプテューヌRe;Birth3 デラックスセット
[Thanks]
Replies:
・About The Last Remnant
My first Steam game. I bought the boxed version and it still made me install Steam; quite the quagmire.
The long play time is probably because I cleared every quest. I didn't go as far as the slaughter grinding.
I wonder if a remaster will ever come out.
・Minecraft
I played up to defeating the Ender Dragon.
Already played it on PS4. It's fun, but the QTEs get in the way. I'll lower the difficulty on the next playthrough.
・Portal 2
It made me motion sick. The first one was fine, oddly enough.
・About the sheer volume
This is what happens when you keep buying during sales. It's not that I'm rich; if anything it's penny wise, pound foolish.
The KOEI titles were also going cheap in anniversary sales.